Test Report: KVM_Linux_crio 20506

6319ed1cff2ab87f49806f23f2b58db8faa9bede:2025-04-01:38963

Tests failed (11/321)

TestAddons/parallel/Ingress (160.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-357468 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-357468 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-357468 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [87f5b89f-50aa-4374-b4de-f14b987a8435] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [87f5b89f-50aa-4374-b4de-f14b987a8435] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.004486602s
I0401 19:50:10.636092   16301 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-357468 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.256464121s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-357468 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.65
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-357468 -n addons-357468
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 logs -n 25: (1.284252775s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-774044                                                                     | download-only-774044 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| delete  | -p download-only-944346                                                                     | download-only-944346 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| delete  | -p download-only-774044                                                                     | download-only-774044 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-856518 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |                     |
	|         | binary-mirror-856518                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41083                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-856518                                                                     | binary-mirror-856518 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| addons  | enable dashboard -p                                                                         | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |                     |
	|         | addons-357468                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |                     |
	|         | addons-357468                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-357468 --wait=true                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-357468 addons disable                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-357468 addons disable                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | -p addons-357468                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-357468 addons                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-357468 addons                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-357468 addons disable                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-357468 addons disable                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-357468 ip                                                                            | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	| addons  | addons-357468 addons disable                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-357468 addons                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-357468 addons                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:50 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-357468 ssh cat                                                                       | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:50 UTC | 01 Apr 25 19:50 UTC |
	|         | /opt/local-path-provisioner/pvc-0afaf634-c3c1-425b-9181-27260ba53259_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-357468 addons disable                                                                | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:50 UTC | 01 Apr 25 19:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-357468 ssh curl -s                                                                   | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:50 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-357468 addons                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:50 UTC | 01 Apr 25 19:50 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-357468 addons                                                                        | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:50 UTC | 01 Apr 25 19:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-357468 ip                                                                            | addons-357468        | jenkins | v1.35.0 | 01 Apr 25 19:52 UTC | 01 Apr 25 19:52 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 19:45:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:45:41.909987   16998 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:45:41.910088   16998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:41.910093   16998 out.go:358] Setting ErrFile to fd 2...
	I0401 19:45:41.910099   16998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:41.910286   16998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 19:45:41.910885   16998 out.go:352] Setting JSON to false
	I0401 19:45:41.911646   16998 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1686,"bootTime":1743535056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:45:41.911699   16998 start.go:139] virtualization: kvm guest
	I0401 19:45:41.913636   16998 out.go:177] * [addons-357468] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:45:41.915110   16998 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 19:45:41.915121   16998 notify.go:220] Checking for updates...
	I0401 19:45:41.917786   16998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:45:41.919745   16998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 19:45:41.921014   16998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:45:41.922178   16998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:45:41.923356   16998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:45:41.924638   16998 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:45:41.956003   16998 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:45:41.957404   16998 start.go:297] selected driver: kvm2
	I0401 19:45:41.957424   16998 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:45:41.957438   16998 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:45:41.958139   16998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:45:41.958245   16998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:45:41.973196   16998 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 19:45:41.973261   16998 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:45:41.973557   16998 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:45:41.973598   16998 cni.go:84] Creating CNI manager for ""
	I0401 19:45:41.973651   16998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:45:41.973659   16998 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:45:41.973723   16998 start.go:340] cluster config:
	{Name:addons-357468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-357468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:45:41.973837   16998 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:45:41.975750   16998 out.go:177] * Starting "addons-357468" primary control-plane node in "addons-357468" cluster
	I0401 19:45:41.976922   16998 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:45:41.976968   16998 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:41.976981   16998 cache.go:56] Caching tarball of preloaded images
	I0401 19:45:41.977080   16998 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:45:41.977092   16998 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 19:45:41.977398   16998 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/config.json ...
	I0401 19:45:41.977421   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/config.json: {Name:mkd18620016566f2989e07932c3fbccb92721f65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:45:41.977608   16998 start.go:360] acquireMachinesLock for addons-357468: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:45:41.977677   16998 start.go:364] duration metric: took 48.957µs to acquireMachinesLock for "addons-357468"
	I0401 19:45:41.977697   16998 start.go:93] Provisioning new machine with config: &{Name:addons-357468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-357468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:45:41.977769   16998 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 19:45:41.979968   16998 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0401 19:45:41.980096   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:45:41.980149   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:45:41.994389   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0401 19:45:41.994852   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:45:41.995314   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:45:41.995339   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:45:41.995723   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:45:41.995889   16998 main.go:141] libmachine: (addons-357468) Calling .GetMachineName
	I0401 19:45:41.996050   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:45:41.996186   16998 start.go:159] libmachine.API.Create for "addons-357468" (driver="kvm2")
	I0401 19:45:41.996220   16998 client.go:168] LocalClient.Create starting
	I0401 19:45:41.996263   16998 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 19:45:42.125590   16998 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 19:45:42.444031   16998 main.go:141] libmachine: Running pre-create checks...
	I0401 19:45:42.444057   16998 main.go:141] libmachine: (addons-357468) Calling .PreCreateCheck
	I0401 19:45:42.444598   16998 main.go:141] libmachine: (addons-357468) Calling .GetConfigRaw
	I0401 19:45:42.445017   16998 main.go:141] libmachine: Creating machine...
	I0401 19:45:42.445032   16998 main.go:141] libmachine: (addons-357468) Calling .Create
	I0401 19:45:42.445175   16998 main.go:141] libmachine: (addons-357468) creating KVM machine...
	I0401 19:45:42.445187   16998 main.go:141] libmachine: (addons-357468) creating network...
	I0401 19:45:42.446602   16998 main.go:141] libmachine: (addons-357468) DBG | found existing default KVM network
	I0401 19:45:42.447371   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:42.447192   17021 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123560}
	I0401 19:45:42.447410   16998 main.go:141] libmachine: (addons-357468) DBG | created network xml: 
	I0401 19:45:42.447428   16998 main.go:141] libmachine: (addons-357468) DBG | <network>
	I0401 19:45:42.447435   16998 main.go:141] libmachine: (addons-357468) DBG |   <name>mk-addons-357468</name>
	I0401 19:45:42.447445   16998 main.go:141] libmachine: (addons-357468) DBG |   <dns enable='no'/>
	I0401 19:45:42.447456   16998 main.go:141] libmachine: (addons-357468) DBG |   
	I0401 19:45:42.447468   16998 main.go:141] libmachine: (addons-357468) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 19:45:42.447479   16998 main.go:141] libmachine: (addons-357468) DBG |     <dhcp>
	I0401 19:45:42.447488   16998 main.go:141] libmachine: (addons-357468) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 19:45:42.447494   16998 main.go:141] libmachine: (addons-357468) DBG |     </dhcp>
	I0401 19:45:42.447498   16998 main.go:141] libmachine: (addons-357468) DBG |   </ip>
	I0401 19:45:42.447512   16998 main.go:141] libmachine: (addons-357468) DBG |   
	I0401 19:45:42.447531   16998 main.go:141] libmachine: (addons-357468) DBG | </network>
	I0401 19:45:42.447554   16998 main.go:141] libmachine: (addons-357468) DBG | 
	I0401 19:45:42.452904   16998 main.go:141] libmachine: (addons-357468) DBG | trying to create private KVM network mk-addons-357468 192.168.39.0/24...
	I0401 19:45:42.514487   16998 main.go:141] libmachine: (addons-357468) DBG | private KVM network mk-addons-357468 192.168.39.0/24 created
	I0401 19:45:42.514534   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:42.514409   17021 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:45:42.514548   16998 main.go:141] libmachine: (addons-357468) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468 ...
	I0401 19:45:42.514565   16998 main.go:141] libmachine: (addons-357468) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 19:45:42.514588   16998 main.go:141] libmachine: (addons-357468) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 19:45:42.791225   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:42.791107   17021 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa...
	I0401 19:45:42.934775   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:42.934560   17021 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/addons-357468.rawdisk...
	I0401 19:45:42.934812   16998 main.go:141] libmachine: (addons-357468) DBG | Writing magic tar header
	I0401 19:45:42.934823   16998 main.go:141] libmachine: (addons-357468) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468 (perms=drwx------)
	I0401 19:45:42.934837   16998 main.go:141] libmachine: (addons-357468) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 19:45:42.934846   16998 main.go:141] libmachine: (addons-357468) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 19:45:42.934861   16998 main.go:141] libmachine: (addons-357468) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 19:45:42.934868   16998 main.go:141] libmachine: (addons-357468) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 19:45:42.934879   16998 main.go:141] libmachine: (addons-357468) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 19:45:42.934884   16998 main.go:141] libmachine: (addons-357468) creating domain...
	I0401 19:45:42.934914   16998 main.go:141] libmachine: (addons-357468) DBG | Writing SSH key tar header
	I0401 19:45:42.934946   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:42.934669   17021 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468 ...
	I0401 19:45:42.934976   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468
	I0401 19:45:42.934987   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 19:45:42.935000   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:45:42.935014   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 19:45:42.935027   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 19:45:42.935038   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home/jenkins
	I0401 19:45:42.935049   16998 main.go:141] libmachine: (addons-357468) DBG | checking permissions on dir: /home
	I0401 19:45:42.935060   16998 main.go:141] libmachine: (addons-357468) DBG | skipping /home - not owner
	I0401 19:45:42.935847   16998 main.go:141] libmachine: (addons-357468) define libvirt domain using xml: 
	I0401 19:45:42.935874   16998 main.go:141] libmachine: (addons-357468) <domain type='kvm'>
	I0401 19:45:42.935884   16998 main.go:141] libmachine: (addons-357468)   <name>addons-357468</name>
	I0401 19:45:42.935893   16998 main.go:141] libmachine: (addons-357468)   <memory unit='MiB'>4000</memory>
	I0401 19:45:42.935904   16998 main.go:141] libmachine: (addons-357468)   <vcpu>2</vcpu>
	I0401 19:45:42.935913   16998 main.go:141] libmachine: (addons-357468)   <features>
	I0401 19:45:42.935924   16998 main.go:141] libmachine: (addons-357468)     <acpi/>
	I0401 19:45:42.935932   16998 main.go:141] libmachine: (addons-357468)     <apic/>
	I0401 19:45:42.935943   16998 main.go:141] libmachine: (addons-357468)     <pae/>
	I0401 19:45:42.935957   16998 main.go:141] libmachine: (addons-357468)     
	I0401 19:45:42.935968   16998 main.go:141] libmachine: (addons-357468)   </features>
	I0401 19:45:42.935975   16998 main.go:141] libmachine: (addons-357468)   <cpu mode='host-passthrough'>
	I0401 19:45:42.935998   16998 main.go:141] libmachine: (addons-357468)   
	I0401 19:45:42.936016   16998 main.go:141] libmachine: (addons-357468)   </cpu>
	I0401 19:45:42.936043   16998 main.go:141] libmachine: (addons-357468)   <os>
	I0401 19:45:42.936065   16998 main.go:141] libmachine: (addons-357468)     <type>hvm</type>
	I0401 19:45:42.936076   16998 main.go:141] libmachine: (addons-357468)     <boot dev='cdrom'/>
	I0401 19:45:42.936087   16998 main.go:141] libmachine: (addons-357468)     <boot dev='hd'/>
	I0401 19:45:42.936096   16998 main.go:141] libmachine: (addons-357468)     <bootmenu enable='no'/>
	I0401 19:45:42.936105   16998 main.go:141] libmachine: (addons-357468)   </os>
	I0401 19:45:42.936112   16998 main.go:141] libmachine: (addons-357468)   <devices>
	I0401 19:45:42.936123   16998 main.go:141] libmachine: (addons-357468)     <disk type='file' device='cdrom'>
	I0401 19:45:42.936138   16998 main.go:141] libmachine: (addons-357468)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/boot2docker.iso'/>
	I0401 19:45:42.936147   16998 main.go:141] libmachine: (addons-357468)       <target dev='hdc' bus='scsi'/>
	I0401 19:45:42.936155   16998 main.go:141] libmachine: (addons-357468)       <readonly/>
	I0401 19:45:42.936169   16998 main.go:141] libmachine: (addons-357468)     </disk>
	I0401 19:45:42.936181   16998 main.go:141] libmachine: (addons-357468)     <disk type='file' device='disk'>
	I0401 19:45:42.936191   16998 main.go:141] libmachine: (addons-357468)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 19:45:42.936206   16998 main.go:141] libmachine: (addons-357468)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/addons-357468.rawdisk'/>
	I0401 19:45:42.936216   16998 main.go:141] libmachine: (addons-357468)       <target dev='hda' bus='virtio'/>
	I0401 19:45:42.936226   16998 main.go:141] libmachine: (addons-357468)     </disk>
	I0401 19:45:42.936231   16998 main.go:141] libmachine: (addons-357468)     <interface type='network'>
	I0401 19:45:42.936253   16998 main.go:141] libmachine: (addons-357468)       <source network='mk-addons-357468'/>
	I0401 19:45:42.936268   16998 main.go:141] libmachine: (addons-357468)       <model type='virtio'/>
	I0401 19:45:42.936276   16998 main.go:141] libmachine: (addons-357468)     </interface>
	I0401 19:45:42.936290   16998 main.go:141] libmachine: (addons-357468)     <interface type='network'>
	I0401 19:45:42.936307   16998 main.go:141] libmachine: (addons-357468)       <source network='default'/>
	I0401 19:45:42.936325   16998 main.go:141] libmachine: (addons-357468)       <model type='virtio'/>
	I0401 19:45:42.936339   16998 main.go:141] libmachine: (addons-357468)     </interface>
	I0401 19:45:42.936353   16998 main.go:141] libmachine: (addons-357468)     <serial type='pty'>
	I0401 19:45:42.936368   16998 main.go:141] libmachine: (addons-357468)       <target port='0'/>
	I0401 19:45:42.936377   16998 main.go:141] libmachine: (addons-357468)     </serial>
	I0401 19:45:42.936391   16998 main.go:141] libmachine: (addons-357468)     <console type='pty'>
	I0401 19:45:42.936402   16998 main.go:141] libmachine: (addons-357468)       <target type='serial' port='0'/>
	I0401 19:45:42.936413   16998 main.go:141] libmachine: (addons-357468)     </console>
	I0401 19:45:42.936423   16998 main.go:141] libmachine: (addons-357468)     <rng model='virtio'>
	I0401 19:45:42.936433   16998 main.go:141] libmachine: (addons-357468)       <backend model='random'>/dev/random</backend>
	I0401 19:45:42.936446   16998 main.go:141] libmachine: (addons-357468)     </rng>
	I0401 19:45:42.936462   16998 main.go:141] libmachine: (addons-357468)     
	I0401 19:45:42.936479   16998 main.go:141] libmachine: (addons-357468)     
	I0401 19:45:42.936499   16998 main.go:141] libmachine: (addons-357468)   </devices>
	I0401 19:45:42.936511   16998 main.go:141] libmachine: (addons-357468) </domain>
	I0401 19:45:42.936520   16998 main.go:141] libmachine: (addons-357468) 
	I0401 19:45:42.942120   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:d8:ac:e9 in network default
	I0401 19:45:42.942608   16998 main.go:141] libmachine: (addons-357468) starting domain...
	I0401 19:45:42.942624   16998 main.go:141] libmachine: (addons-357468) ensuring networks are active...
	I0401 19:45:42.942631   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:42.943169   16998 main.go:141] libmachine: (addons-357468) Ensuring network default is active
	I0401 19:45:42.943530   16998 main.go:141] libmachine: (addons-357468) Ensuring network mk-addons-357468 is active
	I0401 19:45:42.943966   16998 main.go:141] libmachine: (addons-357468) getting domain XML...
	I0401 19:45:42.944541   16998 main.go:141] libmachine: (addons-357468) creating domain...
	I0401 19:45:44.352767   16998 main.go:141] libmachine: (addons-357468) waiting for IP...
	I0401 19:45:44.353654   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:44.354019   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:44.354109   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:44.354022   17021 retry.go:31] will retry after 252.629289ms: waiting for domain to come up
	I0401 19:45:44.608573   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:44.609055   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:44.609084   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:44.609013   17021 retry.go:31] will retry after 285.762447ms: waiting for domain to come up
	I0401 19:45:44.896308   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:44.896823   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:44.896850   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:44.896787   17021 retry.go:31] will retry after 477.162899ms: waiting for domain to come up
	I0401 19:45:45.375561   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:45.376081   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:45.376107   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:45.376026   17021 retry.go:31] will retry after 468.290613ms: waiting for domain to come up
	I0401 19:45:45.845566   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:45.845963   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:45.845990   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:45.845937   17021 retry.go:31] will retry after 726.480302ms: waiting for domain to come up
	I0401 19:45:46.573779   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:46.574189   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:46.574227   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:46.574163   17021 retry.go:31] will retry after 683.910527ms: waiting for domain to come up
	I0401 19:45:47.259729   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:47.260178   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:47.260203   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:47.260145   17021 retry.go:31] will retry after 905.581274ms: waiting for domain to come up
	I0401 19:45:48.167145   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:48.167593   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:48.167621   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:48.167559   17021 retry.go:31] will retry after 1.185315168s: waiting for domain to come up
	I0401 19:45:49.354919   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:49.355385   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:49.355437   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:49.355359   17021 retry.go:31] will retry after 1.278133314s: waiting for domain to come up
	I0401 19:45:50.635134   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:50.635577   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:50.635631   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:50.635549   17021 retry.go:31] will retry after 2.316004299s: waiting for domain to come up
	I0401 19:45:52.952802   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:52.953214   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:52.953241   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:52.953191   17021 retry.go:31] will retry after 2.686916478s: waiting for domain to come up
	I0401 19:45:55.642977   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:55.643323   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:55.643362   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:55.643292   17021 retry.go:31] will retry after 2.440580858s: waiting for domain to come up
	I0401 19:45:58.085568   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:45:58.086098   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:45:58.086123   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:45:58.086081   17021 retry.go:31] will retry after 3.468058976s: waiting for domain to come up
	I0401 19:46:01.558745   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:01.559231   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find current IP address of domain addons-357468 in network mk-addons-357468
	I0401 19:46:01.559252   16998 main.go:141] libmachine: (addons-357468) DBG | I0401 19:46:01.559197   17021 retry.go:31] will retry after 4.259113337s: waiting for domain to come up
	I0401 19:46:05.823119   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:05.823542   16998 main.go:141] libmachine: (addons-357468) found domain IP: 192.168.39.65
	I0401 19:46:05.823577   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has current primary IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:05.823586   16998 main.go:141] libmachine: (addons-357468) reserving static IP address...
	I0401 19:46:05.823947   16998 main.go:141] libmachine: (addons-357468) DBG | unable to find host DHCP lease matching {name: "addons-357468", mac: "52:54:00:2b:c8:c2", ip: "192.168.39.65"} in network mk-addons-357468
	I0401 19:46:05.893906   16998 main.go:141] libmachine: (addons-357468) DBG | Getting to WaitForSSH function...
	I0401 19:46:05.893936   16998 main.go:141] libmachine: (addons-357468) reserved static IP address 192.168.39.65 for domain addons-357468
	I0401 19:46:05.893948   16998 main.go:141] libmachine: (addons-357468) waiting for SSH...
	I0401 19:46:05.896388   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:05.896728   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:05.896758   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:05.896911   16998 main.go:141] libmachine: (addons-357468) DBG | Using SSH client type: external
	I0401 19:46:05.896938   16998 main.go:141] libmachine: (addons-357468) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa (-rw-------)
	I0401 19:46:05.896966   16998 main.go:141] libmachine: (addons-357468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:46:05.896983   16998 main.go:141] libmachine: (addons-357468) DBG | About to run SSH command:
	I0401 19:46:05.896995   16998 main.go:141] libmachine: (addons-357468) DBG | exit 0
	I0401 19:46:06.030473   16998 main.go:141] libmachine: (addons-357468) DBG | SSH cmd err, output: <nil>: 
	I0401 19:46:06.030719   16998 main.go:141] libmachine: (addons-357468) KVM machine creation complete
	I0401 19:46:06.031015   16998 main.go:141] libmachine: (addons-357468) Calling .GetConfigRaw
	I0401 19:46:06.031555   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:06.031735   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:06.031882   16998 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 19:46:06.031895   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:06.033076   16998 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 19:46:06.033089   16998 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 19:46:06.033096   16998 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 19:46:06.033103   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.035272   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.035584   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.035602   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.035739   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:06.035916   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.036035   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.036130   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:06.036303   16998 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:06.036494   16998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0401 19:46:06.036505   16998 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 19:46:06.145652   16998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:46:06.145677   16998 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:46:06.145688   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.148440   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.148810   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.148850   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.149003   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:06.149257   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.149448   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.149611   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:06.149789   16998 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:06.150043   16998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0401 19:46:06.150057   16998 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:46:06.263053   16998 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:46:06.263128   16998 main.go:141] libmachine: found compatible host: buildroot
	I0401 19:46:06.263142   16998 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:46:06.263155   16998 main.go:141] libmachine: (addons-357468) Calling .GetMachineName
	I0401 19:46:06.263412   16998 buildroot.go:166] provisioning hostname "addons-357468"
	I0401 19:46:06.263439   16998 main.go:141] libmachine: (addons-357468) Calling .GetMachineName
	I0401 19:46:06.263630   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.266043   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.266380   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.266412   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.266557   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:06.266714   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.266882   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.267005   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:06.267169   16998 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:06.267375   16998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0401 19:46:06.267389   16998 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-357468 && echo "addons-357468" | sudo tee /etc/hostname
	I0401 19:46:06.393135   16998 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-357468
	
	I0401 19:46:06.393175   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.395743   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.396042   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.396096   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.396266   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:06.396440   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.396608   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.396698   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:06.396821   16998 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:06.397096   16998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0401 19:46:06.397120   16998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-357468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-357468/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-357468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:46:06.520043   16998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:46:06.520074   16998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 19:46:06.520096   16998 buildroot.go:174] setting up certificates
	I0401 19:46:06.520107   16998 provision.go:84] configureAuth start
	I0401 19:46:06.520119   16998 main.go:141] libmachine: (addons-357468) Calling .GetMachineName
	I0401 19:46:06.520427   16998 main.go:141] libmachine: (addons-357468) Calling .GetIP
	I0401 19:46:06.523145   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.523571   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.523595   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.523787   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.525759   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.526054   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.526075   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.526152   16998 provision.go:143] copyHostCerts
	I0401 19:46:06.526246   16998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 19:46:06.526391   16998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 19:46:06.526490   16998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 19:46:06.526575   16998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.addons-357468 san=[127.0.0.1 192.168.39.65 addons-357468 localhost minikube]
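	The SAN set baked into the server certificate above can be spot-checked with openssl on the host (a minimal sketch, assuming openssl is installed; path and SAN list as generated above):
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # the DNS names and IPs from the san=[...] set above should all be listed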
	I0401 19:46:06.727557   16998 provision.go:177] copyRemoteCerts
	I0401 19:46:06.727615   16998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:46:06.727640   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.730443   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.730830   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.730857   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.731011   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:06.731173   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.731345   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:06.731500   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:06.816869   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 19:46:06.840973   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 19:46:06.864983   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:46:06.889179   16998 provision.go:87] duration metric: took 369.057155ms to configureAuth
	I0401 19:46:06.889214   16998 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:46:06.889451   16998 config.go:182] Loaded profile config "addons-357468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:46:06.889530   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:06.892626   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.892945   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:06.892974   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:06.893101   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:06.893301   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.893503   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:06.893637   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:06.893825   16998 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:06.894098   16998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0401 19:46:06.894114   16998 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:46:07.126608   16998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:46:07.126644   16998 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:46:07.126657   16998 main.go:141] libmachine: (addons-357468) Calling .GetURL
	I0401 19:46:07.128059   16998 main.go:141] libmachine: (addons-357468) DBG | using libvirt version 6000000
	I0401 19:46:07.130623   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.130943   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.130964   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.131122   16998 main.go:141] libmachine: Docker is up and running!
	I0401 19:46:07.131143   16998 main.go:141] libmachine: Reticulating splines...
	I0401 19:46:07.131153   16998 client.go:171] duration metric: took 25.134921462s to LocalClient.Create
	I0401 19:46:07.131188   16998 start.go:167] duration metric: took 25.135003129s to libmachine.API.Create "addons-357468"
	I0401 19:46:07.131208   16998 start.go:293] postStartSetup for "addons-357468" (driver="kvm2")
	I0401 19:46:07.131219   16998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:46:07.131234   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:07.131485   16998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:46:07.131515   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:07.133516   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.133786   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.133824   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.133952   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:07.134134   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:07.134285   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:07.134407   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:07.221062   16998 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:46:07.225321   16998 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:46:07.225348   16998 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 19:46:07.225422   16998 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 19:46:07.225448   16998 start.go:296] duration metric: took 94.233561ms for postStartSetup
	I0401 19:46:07.225476   16998 main.go:141] libmachine: (addons-357468) Calling .GetConfigRaw
	I0401 19:46:07.226050   16998 main.go:141] libmachine: (addons-357468) Calling .GetIP
	I0401 19:46:07.228657   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.228950   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.228979   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.229246   16998 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/config.json ...
	I0401 19:46:07.229432   16998 start.go:128] duration metric: took 25.251651972s to createHost
	I0401 19:46:07.229457   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:07.231851   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.232102   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.232127   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.232276   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:07.232436   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:07.232572   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:07.232716   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:07.232862   16998 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:07.233074   16998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0401 19:46:07.233087   16998 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:46:07.347227   16998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743536767.322610800
	
	I0401 19:46:07.347253   16998 fix.go:216] guest clock: 1743536767.322610800
	I0401 19:46:07.347262   16998 fix.go:229] Guest: 2025-04-01 19:46:07.3226108 +0000 UTC Remote: 2025-04-01 19:46:07.229442914 +0000 UTC m=+25.354370372 (delta=93.167886ms)
	I0401 19:46:07.347284   16998 fix.go:200] guest clock delta is within tolerance: 93.167886ms
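	For reference, the delta above is simply guest minus remote: 1743536767.322610800 - 1743536767.229442914 = 0.093167886 s, i.e. the 93.167886ms that fix.go reports as within tolerance.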
	I0401 19:46:07.347291   16998 start.go:83] releasing machines lock for "addons-357468", held for 25.369601805s
	I0401 19:46:07.347339   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:07.347658   16998 main.go:141] libmachine: (addons-357468) Calling .GetIP
	I0401 19:46:07.350423   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.350823   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.350857   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.350997   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:07.351491   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:07.351675   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:07.351805   16998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:46:07.351849   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:07.351921   16998 ssh_runner.go:195] Run: cat /version.json
	I0401 19:46:07.351946   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:07.355641   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.355719   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.355994   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.356022   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.356120   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:07.356140   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:07.356142   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:07.356328   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:07.356342   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:07.356519   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:07.356522   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:07.356700   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:07.356696   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:07.356845   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:07.439329   16998 ssh_runner.go:195] Run: systemctl --version
	I0401 19:46:07.472074   16998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:46:07.637929   16998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:46:07.644981   16998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:46:07.645043   16998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:46:07.661405   16998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:46:07.661435   16998 start.go:495] detecting cgroup driver to use...
	I0401 19:46:07.661523   16998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:46:07.678509   16998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:46:07.692555   16998 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:46:07.692611   16998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:46:07.707799   16998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:46:07.722982   16998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:46:07.835279   16998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:46:07.967186   16998 docker.go:233] disabling docker service ...
	I0401 19:46:07.967264   16998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:46:07.983226   16998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:46:07.996181   16998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:46:08.135190   16998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:46:08.259443   16998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:46:08.274402   16998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:46:08.293202   16998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 19:46:08.293269   16998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:08.304813   16998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:46:08.304870   16998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:08.316681   16998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:08.327826   16998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:08.338966   16998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:46:08.350237   16998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:08.360830   16998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:08.378356   16998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
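	Taken together, the sed edits above converge on a drop-in roughly like the following (a sketch only; the exact section layout of 02-crio.conf on the minikube ISO may differ):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]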
	I0401 19:46:08.389062   16998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:46:08.398983   16998 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:46:08.399035   16998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:46:08.412797   16998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
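	Since the sysctl probe failed until br_netfilter was loaded, the resulting state can be confirmed on the guest with (a minimal sketch):
	  lsmod | grep br_netfilter                    # module loaded by the modprobe above
	  sysctl net.bridge.bridge-nf-call-iptables    # should now resolve instead of "cannot stat"
	  cat /proc/sys/net/ipv4/ip_forward            # expected 1 after the echo above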
	I0401 19:46:08.423321   16998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:46:08.540947   16998 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:46:08.639327   16998 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:46:08.639410   16998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:46:08.644120   16998 start.go:563] Will wait 60s for crictl version
	I0401 19:46:08.644199   16998 ssh_runner.go:195] Run: which crictl
	I0401 19:46:08.647949   16998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:46:08.688225   16998 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:46:08.688340   16998 ssh_runner.go:195] Run: crio --version
	I0401 19:46:08.717010   16998 ssh_runner.go:195] Run: crio --version
	I0401 19:46:08.746894   16998 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 19:46:08.748217   16998 main.go:141] libmachine: (addons-357468) Calling .GetIP
	I0401 19:46:08.751040   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:08.751311   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:08.751334   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:08.751576   16998 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:46:08.755999   16998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:46:08.769350   16998 kubeadm.go:883] updating cluster {Name:addons-357468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-357468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:46:08.769438   16998 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:46:08.769475   16998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:46:08.800640   16998 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 19:46:08.800717   16998 ssh_runner.go:195] Run: which lz4
	I0401 19:46:08.804874   16998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:46:08.809004   16998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:46:08.809030   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 19:46:10.158631   16998 crio.go:462] duration metric: took 1.353778721s to copy over tarball
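	A quick integrity check before the tarball is extracted and removed is to compare its on-guest size against the 399124012 bytes reported above (a sketch, reusing the stat form already shown):
	  stat -c "%s" /preloaded.tar.lz4   # expect 399124012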
	I0401 19:46:10.158696   16998 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:46:12.464390   16998 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.305630564s)
	I0401 19:46:12.464440   16998 crio.go:469] duration metric: took 2.305781709s to extract the tarball
	I0401 19:46:12.464450   16998 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:46:12.502110   16998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:46:12.551872   16998 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:46:12.551894   16998 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:46:12.551901   16998 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.32.2 crio true true} ...
	I0401 19:46:12.552008   16998 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-357468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-357468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:46:12.552065   16998 ssh_runner.go:195] Run: crio config
	I0401 19:46:12.597742   16998 cni.go:84] Creating CNI manager for ""
	I0401 19:46:12.597765   16998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:46:12.597774   16998 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:46:12.597793   16998 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-357468 NodeName:addons-357468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:46:12.597906   16998 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-357468"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.65"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:46:12.597963   16998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 19:46:12.610140   16998 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:46:12.610224   16998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:46:12.621855   16998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0401 19:46:12.643137   16998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:46:12.663485   16998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
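	Once the unit files and the staged kubeadm config are on the guest, they can be inspected before init; the validate step below is a sketch and assumes a kubeadm release that ships "kubeadm config validate":
	  sudo systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in written above
	  sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new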
	I0401 19:46:12.682513   16998 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0401 19:46:12.686889   16998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:46:12.702138   16998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:46:12.844811   16998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:46:12.861129   16998 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468 for IP: 192.168.39.65
	I0401 19:46:12.861152   16998 certs.go:194] generating shared ca certs ...
	I0401 19:46:12.861165   16998 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:12.861299   16998 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 19:46:13.043489   16998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt ...
	I0401 19:46:13.043525   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt: {Name:mkd7c7c2baba64b0187ea5172027e869ac2af429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.043706   16998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key ...
	I0401 19:46:13.043727   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key: {Name:mk75f3a8f3d5a6a1c71a0993d3a2bbfb17013fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.043830   16998 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 19:46:13.251084   16998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt ...
	I0401 19:46:13.251111   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt: {Name:mk16b0244cf000c8b0a277bdb2060208af436726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.251271   16998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key ...
	I0401 19:46:13.251283   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key: {Name:mkbcc9b770d38d1903366dc0ffc2a4621535f051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.251373   16998 certs.go:256] generating profile certs ...
	I0401 19:46:13.251456   16998 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.key
	I0401 19:46:13.251473   16998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt with IP's: []
	I0401 19:46:13.378351   16998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt ...
	I0401 19:46:13.378383   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: {Name:mk6238cc3e388b703b09c212154c5b9ccc3b3f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.378554   16998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.key ...
	I0401 19:46:13.378570   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.key: {Name:mk3a931dcb55a474560944963b2eebcb39c04dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.378663   16998 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.key.63182778
	I0401 19:46:13.378686   16998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.crt.63182778 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65]
	I0401 19:46:13.995712   16998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.crt.63182778 ...
	I0401 19:46:13.995744   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.crt.63182778: {Name:mk7fc2d86f0cc85a1f9b56ba604ab79e4ba7ecc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.995931   16998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.key.63182778 ...
	I0401 19:46:13.995947   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.key.63182778: {Name:mkac749f475b476bbb5dab157fcc9d4d556eeed8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:13.996049   16998 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.crt.63182778 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.crt
	I0401 19:46:13.996124   16998 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.key.63182778 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.key
	I0401 19:46:13.996180   16998 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.key
	I0401 19:46:13.996199   16998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.crt with IP's: []
	I0401 19:46:14.279006   16998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.crt ...
	I0401 19:46:14.279039   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.crt: {Name:mk1afffa409882b2bbfdcc4149146f46f0ca0be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:14.279219   16998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.key ...
	I0401 19:46:14.279233   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.key: {Name:mk680e6668afcc74fcdc5f465055838e804baf75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:14.279423   16998 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 19:46:14.279457   16998 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 19:46:14.279488   16998 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:46:14.279510   16998 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 19:46:14.280016   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:46:14.314477   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 19:46:14.342297   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:46:14.367328   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:46:14.391080   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:46:14.415309   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:46:14.457853   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:46:14.485174   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:46:14.512350   16998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:46:14.535770   16998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 19:46:14.552816   16998 ssh_runner.go:195] Run: openssl version
	I0401 19:46:14.558894   16998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:46:14.569842   16998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:46:14.574467   16998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:46:14.574526   16998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:46:14.580432   16998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
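	The OpenSSL subject-hash link created above can be cross-checked on the guest (a sketch; the b5213941 value is implied by the symlink name in the command above):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem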
	I0401 19:46:14.591652   16998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:46:14.595926   16998 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:46:14.595975   16998 kubeadm.go:392] StartCluster: {Name:addons-357468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-357468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:46:14.596053   16998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:46:14.596141   16998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:46:14.635781   16998 cri.go:89] found id: ""
	I0401 19:46:14.635849   16998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:46:14.645818   16998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:46:14.655696   16998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:46:14.665234   16998 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:46:14.665254   16998 kubeadm.go:157] found existing configuration files:
	
	I0401 19:46:14.665299   16998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:46:14.674350   16998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:46:14.674430   16998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:46:14.683824   16998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:46:14.693019   16998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:46:14.693083   16998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:46:14.703902   16998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:46:14.713360   16998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:46:14.713430   16998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:46:14.723088   16998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:46:14.732138   16998 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:46:14.732205   16998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:46:14.741589   16998 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:46:14.909311   16998 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:46:24.704405   16998 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 19:46:24.704492   16998 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 19:46:24.704624   16998 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:46:24.704708   16998 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:46:24.704788   16998 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 19:46:24.704879   16998 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:46:24.706598   16998 out.go:235]   - Generating certificates and keys ...
	I0401 19:46:24.706701   16998 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 19:46:24.706798   16998 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 19:46:24.706905   16998 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:46:24.706996   16998 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:46:24.707086   16998 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:46:24.707159   16998 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 19:46:24.707236   16998 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 19:46:24.707373   16998 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-357468 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0401 19:46:24.707461   16998 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 19:46:24.707594   16998 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-357468 localhost] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0401 19:46:24.707685   16998 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:46:24.707804   16998 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:46:24.707883   16998 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 19:46:24.707967   16998 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:46:24.708037   16998 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:46:24.708119   16998 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:46:24.708193   16998 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:46:24.708279   16998 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:46:24.708367   16998 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:46:24.708470   16998 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:46:24.708561   16998 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:46:24.709903   16998 out.go:235]   - Booting up control plane ...
	I0401 19:46:24.710021   16998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:46:24.710096   16998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:46:24.710157   16998 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:46:24.710268   16998 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:46:24.710357   16998 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:46:24.710410   16998 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 19:46:24.710562   16998 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:46:24.710671   16998 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 19:46:24.710742   16998 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001052288s
	I0401 19:46:24.710806   16998 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:46:24.710859   16998 kubeadm.go:310] [api-check] The API server is healthy after 5.00216367s
	I0401 19:46:24.710949   16998 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:46:24.711080   16998 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:46:24.711157   16998 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:46:24.711318   16998 kubeadm.go:310] [mark-control-plane] Marking the node addons-357468 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:46:24.711383   16998 kubeadm.go:310] [bootstrap-token] Using token: t0cqmq.snz0vt75i8gyoye9
	I0401 19:46:24.713224   16998 out.go:235]   - Configuring RBAC rules ...
	I0401 19:46:24.713320   16998 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:46:24.713409   16998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:46:24.713559   16998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:46:24.713685   16998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:46:24.713786   16998 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:46:24.713893   16998 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:46:24.714065   16998 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:46:24.714132   16998 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 19:46:24.714200   16998 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 19:46:24.714228   16998 kubeadm.go:310] 
	I0401 19:46:24.714287   16998 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 19:46:24.714294   16998 kubeadm.go:310] 
	I0401 19:46:24.714387   16998 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 19:46:24.714398   16998 kubeadm.go:310] 
	I0401 19:46:24.714430   16998 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 19:46:24.714506   16998 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:46:24.714552   16998 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:46:24.714557   16998 kubeadm.go:310] 
	I0401 19:46:24.714623   16998 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 19:46:24.714635   16998 kubeadm.go:310] 
	I0401 19:46:24.714699   16998 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:46:24.714706   16998 kubeadm.go:310] 
	I0401 19:46:24.714779   16998 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 19:46:24.714884   16998 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:46:24.714966   16998 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:46:24.714978   16998 kubeadm.go:310] 
	I0401 19:46:24.715071   16998 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:46:24.715142   16998 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 19:46:24.715152   16998 kubeadm.go:310] 
	I0401 19:46:24.715236   16998 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t0cqmq.snz0vt75i8gyoye9 \
	I0401 19:46:24.715336   16998 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 19:46:24.715377   16998 kubeadm.go:310] 	--control-plane 
	I0401 19:46:24.715386   16998 kubeadm.go:310] 
	I0401 19:46:24.715509   16998 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:46:24.715523   16998 kubeadm.go:310] 
	I0401 19:46:24.715657   16998 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t0cqmq.snz0vt75i8gyoye9 \
	I0401 19:46:24.715814   16998 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 19:46:24.715841   16998 cni.go:84] Creating CNI manager for ""
	I0401 19:46:24.715853   16998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:46:24.718075   16998 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:46:24.719330   16998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:46:24.731129   16998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
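	The 496-byte file pushed above is the bridge CNI configuration minikube installs for the "kvm2" driver + "crio" runtime combination; its exact contents are not captured in this log. A minimal conflist of the same general shape, written to the same location (the JSON field values below are illustrative assumptions, not a byte-for-byte copy of minikube's file):
	
	  # Illustrative only: a minimal bridge CNI conflist in the location used above.
	  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	EOF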
	I0401 19:46:24.751879   16998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:46:24.751965   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:24.752008   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-357468 minikube.k8s.io/updated_at=2025_04_01T19_46_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=addons-357468 minikube.k8s.io/primary=true
	I0401 19:46:24.901745   16998 ops.go:34] apiserver oom_adj: -16
	I0401 19:46:24.901915   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:25.402596   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:25.902985   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:26.402803   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:26.902883   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:27.402295   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:27.902317   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:28.402000   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:28.903033   16998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:29.025879   16998 kubeadm.go:1113] duration metric: took 4.273984332s to wait for elevateKubeSystemPrivileges
	I0401 19:46:29.025921   16998 kubeadm.go:394] duration metric: took 14.429948905s to StartCluster
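	The burst of "kubectl get sa default" calls above is a readiness poll: minikube grants cluster-admin to the kube-system default service account and then retries the get until the default service account exists, which is the 4.27s "elevateKubeSystemPrivileges" duration reported above. A shell equivalent of that step, reusing the exact commands from the log (the 0.5s retry interval is inferred from the timestamps, not stated in the log):
	
	  # Grant cluster-admin to kube-system:default, then wait for the default SA to appear.
	  sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	  until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5  # assumed interval; the log timestamps show roughly half-second retries
	  done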
	I0401 19:46:29.025943   16998 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:29.026073   16998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 19:46:29.026434   16998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:29.026651   16998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 19:46:29.026660   16998 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:46:29.026762   16998 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0401 19:46:29.026902   16998 addons.go:69] Setting yakd=true in profile "addons-357468"
	I0401 19:46:29.026933   16998 addons.go:69] Setting default-storageclass=true in profile "addons-357468"
	I0401 19:46:29.026957   16998 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-357468"
	I0401 19:46:29.026971   16998 addons.go:69] Setting storage-provisioner=true in profile "addons-357468"
	I0401 19:46:29.026979   16998 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-357468"
	I0401 19:46:29.026983   16998 addons.go:69] Setting metrics-server=true in profile "addons-357468"
	I0401 19:46:29.026950   16998 addons.go:69] Setting ingress=true in profile "addons-357468"
	I0401 19:46:29.026979   16998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-357468"
	I0401 19:46:29.026996   16998 addons.go:69] Setting registry=true in profile "addons-357468"
	I0401 19:46:29.027002   16998 addons.go:69] Setting volumesnapshots=true in profile "addons-357468"
	I0401 19:46:29.027004   16998 addons.go:238] Setting addon ingress=true in "addons-357468"
	I0401 19:46:29.027009   16998 addons.go:238] Setting addon registry=true in "addons-357468"
	I0401 19:46:29.027013   16998 addons.go:238] Setting addon volumesnapshots=true in "addons-357468"
	I0401 19:46:29.027015   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.027037   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.026923   16998 addons.go:69] Setting inspektor-gadget=true in profile "addons-357468"
	I0401 19:46:29.027043   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.027050   16998 addons.go:238] Setting addon inspektor-gadget=true in "addons-357468"
	I0401 19:46:29.027066   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.027037   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.027263   16998 addons.go:238] Setting addon storage-provisioner=true in "addons-357468"
	I0401 19:46:29.026957   16998 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-357468"
	I0401 19:46:29.027508   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.027527   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.026973   16998 addons.go:238] Setting addon yakd=true in "addons-357468"
	I0401 19:46:29.026952   16998 addons.go:69] Setting ingress-dns=true in profile "addons-357468"
	I0401 19:46:29.027547   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.027552   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.026942   16998 addons.go:69] Setting gcp-auth=true in profile "addons-357468"
	I0401 19:46:29.026983   16998 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-357468"
	I0401 19:46:29.027573   16998 mustload.go:65] Loading cluster: addons-357468
	I0401 19:46:29.027530   16998 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-357468"
	I0401 19:46:29.027598   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.027618   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.026947   16998 addons.go:69] Setting cloud-spanner=true in profile "addons-357468"
	I0401 19:46:29.027649   16998 addons.go:238] Setting addon cloud-spanner=true in "addons-357468"
	I0401 19:46:29.027663   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.027671   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.027721   16998 config.go:182] Loaded profile config "addons-357468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:46:29.027901   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.027926   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.027938   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.027961   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.028005   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.028027   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.028066   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.028094   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.028146   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.028178   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.028241   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.028287   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.027552   16998 addons.go:238] Setting addon ingress-dns=true in "addons-357468"
	I0401 19:46:29.028485   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.028853   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.028884   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.027573   16998 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-357468"
	I0401 19:46:29.026973   16998 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-357468"
	I0401 19:46:29.028961   16998 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-357468"
	I0401 19:46:29.028986   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.029344   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.029377   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.030901   16998 out.go:177] * Verifying Kubernetes components...
	I0401 19:46:29.028107   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.026920   16998 config.go:182] Loaded profile config "addons-357468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:46:29.027556   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.026995   16998 addons.go:238] Setting addon metrics-server=true in "addons-357468"
	I0401 19:46:29.031298   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.038346   16998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:46:29.027531   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.026992   16998 addons.go:69] Setting volcano=true in profile "addons-357468"
	I0401 19:46:29.038473   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.038529   16998 addons.go:238] Setting addon volcano=true in "addons-357468"
	I0401 19:46:29.038581   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.047732   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0401 19:46:29.047750   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0401 19:46:29.048198   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.048250   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.048508   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I0401 19:46:29.048789   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.048802   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.048805   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.048817   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.048816   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.049152   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.049225   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.049246   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.049322   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.049768   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.049926   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.050342   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.050386   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.050518   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.050548   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.051823   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I0401 19:46:29.052177   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.052605   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.052622   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.052997   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.053377   16998 addons.go:238] Setting addon default-storageclass=true in "addons-357468"
	I0401 19:46:29.053413   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.053546   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.053571   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.053736   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.053766   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.054268   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0401 19:46:29.062725   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.062777   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.062789   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.062822   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.063124   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I0401 19:46:29.062789   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.063226   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.065508   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.065615   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.066673   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.066715   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.069084   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.069107   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.069259   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.069271   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.078662   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.078753   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.078831   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0401 19:46:29.078941   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I0401 19:46:29.079337   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.079686   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.079729   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.079740   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.080230   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.080252   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.080632   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.081150   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.081167   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.081610   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.081666   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.082049   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.082082   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.082807   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.082847   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.084887   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0401 19:46:29.085346   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.085450   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.085978   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.085998   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.086059   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.087104   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.087715   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.087768   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.087995   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.088070   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0401 19:46:29.089280   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.089803   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.089832   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.090200   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.090979   16998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0401 19:46:29.091286   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.091323   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.093031   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I0401 19:46:29.093998   16998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0401 19:46:29.095307   16998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0401 19:46:29.096699   16998 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 19:46:29.096717   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0401 19:46:29.096745   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.100588   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.100983   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.101009   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.101245   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.101403   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.101514   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.101610   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.107055   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0401 19:46:29.107387   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.107819   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.107843   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.108199   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.108702   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.108740   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.112240   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0401 19:46:29.112801   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.113397   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.113414   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.113804   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.114018   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.116049   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.118553   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0401 19:46:29.118698   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0401 19:46:29.119197   16998 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0401 19:46:29.119493   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.120231   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0401 19:46:29.120494   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.120515   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.120634   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0401 19:46:29.120688   16998 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0401 19:46:29.120704   16998 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0401 19:46:29.120733   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.121154   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.121648   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.121668   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.122116   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.122336   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.123031   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.124009   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.124028   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.124099   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.124154   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.124186   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.124199   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.124214   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.124486   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.124565   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39107
	I0401 19:46:29.124635   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43101
	I0401 19:46:29.124719   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.124930   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.125303   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.125321   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.125390   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.125447   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.126083   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.126382   16998 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-357468"
	I0401 19:46:29.126423   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:29.126504   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.126536   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.126759   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.126800   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.126832   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.127056   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.127166   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.127204   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.127246   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.127260   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.127612   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.127625   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.127695   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.127733   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.128047   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.128111   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.128515   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.128920   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.128948   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.129148   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.129646   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.129663   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.130274   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.130522   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.132149   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.132677   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.133567   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0401 19:46:29.134277   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.134697   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.134716   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.135108   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.135229   16998 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0401 19:46:29.135288   16998 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0401 19:46:29.135377   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.136600   16998 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0401 19:46:29.136617   16998 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0401 19:46:29.136634   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.137415   16998 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0401 19:46:29.137430   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0401 19:46:29.137448   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.139518   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I0401 19:46:29.139954   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.140391   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
	I0401 19:46:29.141127   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.141140   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.141189   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.142140   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0401 19:46:29.142151   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.142228   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.142257   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.142262   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.142277   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.142307   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.142609   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.142659   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.142774   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.142893   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.142910   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.142960   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.143166   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.143184   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.143238   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0401 19:46:29.143550   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.143946   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.144128   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.144192   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.144638   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.144654   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.145423   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.145430   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.145478   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.145510   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0401 19:46:29.145650   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.146030   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.146053   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.146275   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.146533   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.146576   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.146671   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.146928   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.147235   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.147841   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.147881   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.148088   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.148182   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0401 19:46:29.148242   16998 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0401 19:46:29.148610   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.148624   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.149005   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.149123   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.150016   16998 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 19:46:29.150038   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0401 19:46:29.150056   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.150463   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.151515   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0401 19:46:29.152318   16998 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0401 19:46:29.153182   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.153549   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.153711   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.153738   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.153857   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.153986   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.154141   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.154260   16998 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0401 19:46:29.154276   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0401 19:46:29.154292   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.154971   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0401 19:46:29.156315   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0401 19:46:29.157180   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.157625   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.157709   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.157864   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.158018   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.158194   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.158377   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.158885   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0401 19:46:29.160063   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0401 19:46:29.161217   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0401 19:46:29.162399   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0401 19:46:29.163885   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0401 19:46:29.163913   16998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0401 19:46:29.163937   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.164634   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0401 19:46:29.165088   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.165585   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.165609   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.166513   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.166876   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.169011   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.169063   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.169163   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0401 19:46:29.169180   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0401 19:46:29.169802   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.170529   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.170866   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.170882   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.170959   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.170981   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.171182   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.171246   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.171414   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.171468   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.171525   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I0401 19:46:29.171768   16998 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:46:29.171988   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.172289   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.172600   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.172776   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.172893   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0401 19:46:29.172971   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.173046   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.173191   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.173207   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.173338   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.173991   16998 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:46:29.174008   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:46:29.174026   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.174097   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.174099   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.174310   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.174594   16998 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0401 19:46:29.175749   16998 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 19:46:29.175767   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0401 19:46:29.175794   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.175913   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.176017   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.176644   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.176812   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.176825   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.177212   16998 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:46:29.177244   16998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:46:29.177260   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.177950   16998 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0401 19:46:29.178108   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.178492   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.179656   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.179984   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0401 19:46:29.180013   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.180030   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.180272   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.180317   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.180485   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.180650   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.180663   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.181126   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.181145   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.181208   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.181491   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.181540   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.181758   16998 out.go:177]   - Using image docker.io/registry:2.8.3
	I0401 19:46:29.182028   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:29.182057   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:29.182055   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.182074   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.182647   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.182847   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.183038   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.183148   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.183167   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.183199   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.183209   16998 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0401 19:46:29.183340   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.183372   16998 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0401 19:46:29.183384   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0401 19:46:29.183400   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.184364   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.184416   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.184621   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.184774   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.184900   16998 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:46:29.184915   16998 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:46:29.184930   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.188611   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.189192   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.189228   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.189246   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0401 19:46:29.189915   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.189920   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.190185   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.190302   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.190669   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.190690   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.190703   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.190776   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.191038   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.191279   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.191706   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.191728   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.191882   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
	I0401 19:46:29.192238   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.192279   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.192446   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.192682   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.192844   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.192869   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.193018   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.193152   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.193211   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.193308   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	W0401 19:46:29.194688   16998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57146->192.168.39.65:22: read: connection reset by peer
	I0401 19:46:29.194750   16998 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0401 19:46:29.194789   16998 retry.go:31] will retry after 199.487422ms: ssh: handshake failed: read tcp 192.168.39.1:57146->192.168.39.65:22: read: connection reset by peer
	I0401 19:46:29.194864   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.195087   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:29.195097   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:29.195246   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:29.195258   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:29.195259   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:29.195265   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:29.195284   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:29.195450   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:29.195463   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	W0401 19:46:29.195539   16998 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0401 19:46:29.196126   16998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0401 19:46:29.196140   16998 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0401 19:46:29.196152   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.199515   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.199931   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.199963   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.200165   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.200331   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.200565   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.200656   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:29.203694   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I0401 19:46:29.204070   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:29.204524   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:29.204546   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:29.204846   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:29.205033   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:29.206381   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:29.208009   16998 out.go:177]   - Using image docker.io/busybox:stable
	I0401 19:46:29.209495   16998 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0401 19:46:29.210881   16998 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 19:46:29.210905   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0401 19:46:29.210925   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:29.214027   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.214491   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:29.214515   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:29.214663   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:29.214855   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:29.215065   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:29.215233   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	W0401 19:46:29.218188   16998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.65:22: read: connection reset by peer
	I0401 19:46:29.218239   16998 retry.go:31] will retry after 206.047586ms: ssh: handshake failed: read tcp 192.168.39.1:57158->192.168.39.65:22: read: connection reset by peer
	W0401 19:46:29.395179   16998 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57170->192.168.39.65:22: read: connection reset by peer
	I0401 19:46:29.395212   16998 retry.go:31] will retry after 438.228878ms: ssh: handshake failed: read tcp 192.168.39.1:57170->192.168.39.65:22: read: connection reset by peer
	I0401 19:46:29.532009   16998 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0401 19:46:29.532030   16998 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0401 19:46:29.590081   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0401 19:46:29.595488   16998 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:46:29.595679   16998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 19:46:29.610222   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0401 19:46:29.610248   16998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0401 19:46:29.629283   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:46:29.698168   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 19:46:29.705194   16998 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0401 19:46:29.705214   16998 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0401 19:46:29.717613   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 19:46:29.730029   16998 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0401 19:46:29.730049   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0401 19:46:29.738839   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0401 19:46:29.749479   16998 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0401 19:46:29.749500   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0401 19:46:29.752442   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 19:46:29.775162   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0401 19:46:29.775188   16998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0401 19:46:29.815208   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:46:29.863064   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 19:46:29.886053   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0401 19:46:29.915514   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0401 19:46:29.934893   16998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0401 19:46:29.934915   16998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0401 19:46:30.000295   16998 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0401 19:46:30.000327   16998 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0401 19:46:30.008482   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0401 19:46:30.008508   16998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0401 19:46:30.229940   16998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0401 19:46:30.229967   16998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0401 19:46:30.258678   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0401 19:46:30.258702   16998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0401 19:46:30.319472   16998 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0401 19:46:30.319509   16998 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0401 19:46:30.474697   16998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:46:30.474722   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0401 19:46:30.517134   16998 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0401 19:46:30.517155   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0401 19:46:30.519914   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0401 19:46:30.519941   16998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0401 19:46:30.531885   16998 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0401 19:46:30.531915   16998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0401 19:46:30.808804   16998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0401 19:46:30.808828   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0401 19:46:30.852090   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0401 19:46:30.859965   16998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:46:30.859992   16998 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:46:31.004998   16998 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0401 19:46:31.005023   16998 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0401 19:46:31.246577   16998 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:46:31.246606   16998 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:46:31.268486   16998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0401 19:46:31.268528   16998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0401 19:46:31.364069   16998 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.768549449s)
	I0401 19:46:31.364139   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.774016997s)
	I0401 19:46:31.364187   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:31.364207   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:31.364477   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:31.364492   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:31.364502   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:31.364510   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:31.364715   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:31.364728   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:31.364931   16998 node_ready.go:35] waiting up to 6m0s for node "addons-357468" to be "Ready" ...
	I0401 19:46:31.371015   16998 node_ready.go:49] node "addons-357468" has status "Ready":"True"
	I0401 19:46:31.371033   16998 node_ready.go:38] duration metric: took 6.085937ms for node "addons-357468" to be "Ready" ...
	I0401 19:46:31.371044   16998 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:46:31.382867   16998 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace to be "Ready" ...
	I0401 19:46:31.562488   16998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0401 19:46:31.562511   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0401 19:46:31.582348   16998 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 19:46:31.582372   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0401 19:46:31.659008   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:46:31.857027   16998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0401 19:46:31.857048   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0401 19:46:31.924130   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 19:46:32.048977   16998 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 19:46:32.049005   16998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0401 19:46:32.067113   16998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.471399738s)
	I0401 19:46:32.067155   16998 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0401 19:46:32.067168   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.437842094s)
	I0401 19:46:32.067224   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.369026155s)
	I0401 19:46:32.067227   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:32.067267   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:32.067268   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:32.067278   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:32.067635   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:32.067637   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:32.067656   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:32.067666   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:32.067674   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:32.067682   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:32.067681   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:32.067690   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:32.067701   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:32.067689   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:32.067990   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:32.067995   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:32.068007   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:32.068014   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:32.068039   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:32.068046   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:32.094340   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:32.094358   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:32.094638   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:32.094655   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:32.094662   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:32.250680   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 19:46:32.578973   16998 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-357468" context rescaled to 1 replicas
	I0401 19:46:33.621998   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:36.005057   16998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0401 19:46:36.005091   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:36.008404   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:36.008873   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:36.008898   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:36.009112   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:36.009285   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:36.009401   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:36.009539   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:36.032208   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:36.377907   16998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0401 19:46:36.411119   16998 addons.go:238] Setting addon gcp-auth=true in "addons-357468"
	I0401 19:46:36.411186   16998 host.go:66] Checking if "addons-357468" exists ...
	I0401 19:46:36.411519   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:36.411559   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:36.427654   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0401 19:46:36.428217   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:36.428735   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:36.428771   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:36.429132   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:36.429632   16998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:46:36.429669   16998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:46:36.445709   16998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I0401 19:46:36.446129   16998 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:46:36.446610   16998 main.go:141] libmachine: Using API Version  1
	I0401 19:46:36.446628   16998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:46:36.447001   16998 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:46:36.447163   16998 main.go:141] libmachine: (addons-357468) Calling .GetState
	I0401 19:46:36.448718   16998 main.go:141] libmachine: (addons-357468) Calling .DriverName
	I0401 19:46:36.448928   16998 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0401 19:46:36.448955   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHHostname
	I0401 19:46:36.451372   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:36.451721   16998 main.go:141] libmachine: (addons-357468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:c8:c2", ip: ""} in network mk-addons-357468: {Iface:virbr1 ExpiryTime:2025-04-01 20:45:57 +0000 UTC Type:0 Mac:52:54:00:2b:c8:c2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:addons-357468 Clientid:01:52:54:00:2b:c8:c2}
	I0401 19:46:36.451756   16998 main.go:141] libmachine: (addons-357468) DBG | domain addons-357468 has defined IP address 192.168.39.65 and MAC address 52:54:00:2b:c8:c2 in network mk-addons-357468
	I0401 19:46:36.451835   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHPort
	I0401 19:46:36.451984   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHKeyPath
	I0401 19:46:36.452131   16998 main.go:141] libmachine: (addons-357468) Calling .GetSSHUsername
	I0401 19:46:36.452294   16998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/addons-357468/id_rsa Username:docker}
	I0401 19:46:37.689347   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.971694785s)
	I0401 19:46:37.689406   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689409   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.950539579s)
	I0401 19:46:37.689418   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689465   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.937002886s)
	I0401 19:46:37.689445   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689511   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689520   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689536   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.874289862s)
	I0401 19:46:37.689522   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689555   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689565   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689581   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.826492284s)
	I0401 19:46:37.689605   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689615   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689615   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.803536764s)
	I0401 19:46:37.689632   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689643   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689703   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.774160879s)
	I0401 19:46:37.689725   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689733   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689810   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.837689757s)
	I0401 19:46:37.689827   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689837   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689855   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.689874   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.689902   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.689908   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.689916   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689922   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689936   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.030903953s)
	I0401 19:46:37.689952   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689955   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.689961   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.689967   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.689975   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.689980   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690179   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690242   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690258   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690264   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690276   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690279   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690285   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690280   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690292   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690302   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690312   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690319   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690321   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690328   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690338   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690344   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690286   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690511   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690538   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690544   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690551   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690557   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690578   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690600   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690603   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690607   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690610   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690614   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690617   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690621   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690624   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690663   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690682   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690688   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690694   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.690701   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.690890   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.690916   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.690923   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.690302   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.691650   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.691676   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.691682   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.691881   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.691906   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.691913   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.692069   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.692103   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.692121   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.692130   16998 addons.go:479] Verifying addon ingress=true in "addons-357468"
	I0401 19:46:37.692627   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.692647   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.692670   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.692676   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.692684   16998 addons.go:479] Verifying addon metrics-server=true in "addons-357468"
	I0401 19:46:37.693046   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.693057   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.693068   16998 addons.go:479] Verifying addon registry=true in "addons-357468"
	I0401 19:46:37.694149   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.694205   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.694251   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.694303   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.694329   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.694662   16998 out.go:177] * Verifying ingress addon...
	I0401 19:46:37.695779   16998 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-357468 service yakd-dashboard -n yakd-dashboard
	
	I0401 19:46:37.695828   16998 out.go:177] * Verifying registry addon...
	I0401 19:46:37.696692   16998 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0401 19:46:37.698008   16998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0401 19:46:37.723166   16998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 19:46:37.723187   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:37.723166   16998 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0401 19:46:37.723200   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:37.778239   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:37.778269   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:37.778639   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:37.778690   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:37.778661   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:37.978458   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.054286105s)
	W0401 19:46:37.978511   16998 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 19:46:37.978532   16998 retry.go:31] will retry after 280.54106ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 19:46:38.219635   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:38.220320   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:38.259463   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 19:46:38.414636   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:38.734817   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:38.737409   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:38.931607   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.68087891s)
	I0401 19:46:38.931647   16998 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.482699535s)
	I0401 19:46:38.931662   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:38.931674   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:38.932095   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:38.932111   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:38.932116   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:38.932133   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:38.932142   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:38.932429   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:38.932445   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:38.932459   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:38.932470   16998 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-357468"
	I0401 19:46:38.933606   16998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0401 19:46:38.934413   16998 out.go:177] * Verifying csi-hostpath-driver addon...
	I0401 19:46:38.935703   16998 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0401 19:46:38.936419   16998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0401 19:46:38.936701   16998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0401 19:46:38.936715   16998 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0401 19:46:38.962978   16998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 19:46:38.962998   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:39.016411   16998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0401 19:46:39.016437   16998 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0401 19:46:39.078509   16998 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 19:46:39.078540   16998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0401 19:46:39.149004   16998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 19:46:39.205578   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:39.206018   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:39.441447   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:39.701669   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:39.702678   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:39.940040   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:40.201582   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:40.201676   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:40.439811   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:40.558378   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.298873759s)
	I0401 19:46:40.558443   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:40.558461   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:40.558742   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:40.558768   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:40.558779   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:40.558787   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:40.559047   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:40.559061   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:40.705114   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:40.705154   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:40.867556   16998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.718509871s)
	I0401 19:46:40.867608   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:40.867620   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:40.867881   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:40.867895   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:40.867905   16998 main.go:141] libmachine: Making call to close driver server
	I0401 19:46:40.867911   16998 main.go:141] libmachine: (addons-357468) Calling .Close
	I0401 19:46:40.868204   16998 main.go:141] libmachine: (addons-357468) DBG | Closing plugin on server side
	I0401 19:46:40.868245   16998 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:46:40.868255   16998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:46:40.869216   16998 addons.go:479] Verifying addon gcp-auth=true in "addons-357468"
	I0401 19:46:40.871699   16998 out.go:177] * Verifying gcp-auth addon...
	I0401 19:46:40.873390   16998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0401 19:46:40.901547   16998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0401 19:46:40.901566   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:40.918181   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:41.007005   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:41.200057   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:41.201882   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:41.376676   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:41.439842   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:41.701644   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:41.702103   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:41.877008   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:41.941284   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:42.200092   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:42.201719   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:42.377513   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:42.441328   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:42.701343   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:42.701358   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:42.876305   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:42.948890   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:43.201938   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:43.202336   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:43.376034   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:43.388156   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:43.440387   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:43.701064   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:43.701176   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:43.877297   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:43.940663   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:44.201137   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:44.202467   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:44.376768   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:44.527487   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:44.752043   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:44.752472   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:44.876377   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:44.939736   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:45.200985   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:45.203015   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:45.377228   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:45.440882   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:45.701201   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:45.701436   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:45.876876   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:45.888621   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:45.939650   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:46.199805   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:46.201801   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:46.376688   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:46.439624   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:46.701012   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:46.702709   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:46.877313   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:46.939130   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:47.201265   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:47.202645   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:47.376911   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:47.440134   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:47.716282   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:47.716766   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:47.877398   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:47.888971   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:47.940435   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:48.200671   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:48.201929   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:48.377087   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:48.439857   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:48.701841   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:48.702075   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:48.877451   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:48.939984   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:49.200468   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:49.201555   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:49.376714   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:49.440509   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:49.702076   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:49.702311   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:49.877284   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:49.889115   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:49.940741   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:50.200167   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:50.207055   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:50.376861   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:50.439982   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:50.700197   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:50.700936   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:50.877690   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:50.939561   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:51.201870   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:51.202028   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:51.377105   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:51.440739   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:51.699685   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:51.701723   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:51.876578   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:51.940074   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:52.202133   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:52.202143   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:52.377247   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:52.387907   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:52.440493   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:52.701784   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:52.701965   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:52.877773   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:52.940844   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:53.201102   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:53.201315   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:53.378605   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:53.440122   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:53.701660   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:53.702487   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:53.876911   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:53.940051   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:54.200395   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:54.201026   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:54.377130   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:54.388249   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:54.440429   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:54.946585   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:54.946785   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:54.946876   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:54.947124   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:55.200552   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:55.201175   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:55.377065   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:55.440011   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:55.700191   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:55.701513   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:55.876500   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:55.939104   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:56.200449   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:56.201032   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:56.377094   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:56.439312   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:56.701860   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:56.701955   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:56.877053   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:56.888187   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:56.940875   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:57.435706   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:57.435753   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:57.435925   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:57.441261   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:57.700513   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:57.701340   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:57.876020   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:57.940453   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:58.202069   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:58.202893   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:58.377804   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:58.440454   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:58.701808   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:58.701957   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:58.876686   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:58.940327   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:59.199967   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:59.204195   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:59.909198   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:59.909310   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:59.911700   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:59.912048   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:59.913579   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:46:59.916217   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:46:59.941076   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:00.201949   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:00.202262   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:00.377966   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:00.439957   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:00.701076   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:00.701709   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:00.876592   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:00.941130   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:01.201027   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:01.202355   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:01.376088   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:01.439676   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:01.700885   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:01.701883   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:01.881980   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:01.982488   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:02.201311   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:02.201830   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:02.377044   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:02.387246   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:02.440731   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:02.699603   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:02.701300   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:02.877495   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:02.940600   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:03.199749   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:03.201723   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:03.376953   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:03.440395   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:03.700987   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:03.701174   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:03.877445   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:03.941448   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:04.201014   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:04.201336   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:04.377504   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:04.389397   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:04.440436   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:04.700721   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:04.702161   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:04.877159   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:04.940278   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:05.200362   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:05.202411   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:05.376713   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:05.440085   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:05.701447   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:05.701555   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:05.876363   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:05.949523   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:06.200340   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:06.200996   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:06.377199   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:06.441665   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:06.699929   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:06.701549   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:06.876610   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:06.888186   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:06.958871   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:07.200160   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:07.202131   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:07.377571   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:07.439340   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:07.700556   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:07.701717   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:07.876656   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:07.940176   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:08.202296   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:08.202424   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:08.376182   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:08.439450   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:08.700485   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:08.701307   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:08.877798   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:08.888929   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:08.940375   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:09.202858   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:09.202870   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:09.377036   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:09.440547   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:09.700677   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:09.702562   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:09.876457   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:09.939989   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:10.201174   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:10.201307   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:10.376130   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:10.440695   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:10.699891   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:10.701645   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:10.876975   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:10.940370   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:11.200667   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:11.201775   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:11.376838   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:11.388518   16998 pod_ready.go:103] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:11.439755   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:11.699699   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:11.701288   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:11.877134   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:11.941020   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:12.200809   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:12.201326   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:12.376065   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:12.387356   16998 pod_ready.go:93] pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:12.387375   16998 pod_ready.go:82] duration metric: took 41.004482811s for pod "amd-gpu-device-plugin-bcc9r" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.387383   16998 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7nvdn" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.390311   16998 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-7nvdn" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-7nvdn" not found
	I0401 19:47:12.390328   16998 pod_ready.go:82] duration metric: took 2.939066ms for pod "coredns-668d6bf9bc-7nvdn" in "kube-system" namespace to be "Ready" ...
	E0401 19:47:12.390336   16998 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-7nvdn" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-7nvdn" not found
	I0401 19:47:12.390342   16998 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-cfh7q" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.394249   16998 pod_ready.go:93] pod "coredns-668d6bf9bc-cfh7q" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:12.394268   16998 pod_ready.go:82] duration metric: took 3.919864ms for pod "coredns-668d6bf9bc-cfh7q" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.394277   16998 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.398333   16998 pod_ready.go:93] pod "etcd-addons-357468" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:12.398354   16998 pod_ready.go:82] duration metric: took 4.06919ms for pod "etcd-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.398365   16998 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.402517   16998 pod_ready.go:93] pod "kube-apiserver-addons-357468" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:12.402532   16998 pod_ready.go:82] duration metric: took 4.160815ms for pod "kube-apiserver-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.402540   16998 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.441262   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:12.585439   16998 pod_ready.go:93] pod "kube-controller-manager-addons-357468" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:12.585462   16998 pod_ready.go:82] duration metric: took 182.915613ms for pod "kube-controller-manager-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.585473   16998 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rm6gh" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.700766   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:12.701360   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:12.876935   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:12.940478   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:12.985955   16998 pod_ready.go:93] pod "kube-proxy-rm6gh" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:12.985977   16998 pod_ready.go:82] duration metric: took 400.497834ms for pod "kube-proxy-rm6gh" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:12.985986   16998 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:13.205106   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:13.303384   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:13.377100   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:13.386600   16998 pod_ready.go:93] pod "kube-scheduler-addons-357468" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:13.386627   16998 pod_ready.go:82] duration metric: took 400.634271ms for pod "kube-scheduler-addons-357468" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:13.386638   16998 pod_ready.go:39] duration metric: took 42.015579181s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:47:13.386659   16998 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:47:13.386736   16998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:47:13.430144   16998 api_server.go:72] duration metric: took 44.403448606s to wait for apiserver process to appear ...
	I0401 19:47:13.430170   16998 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:47:13.430191   16998 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0401 19:47:13.435523   16998 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0401 19:47:13.436770   16998 api_server.go:141] control plane version: v1.32.2
	I0401 19:47:13.436794   16998 api_server.go:131] duration metric: took 6.617025ms to wait for apiserver health ...
	I0401 19:47:13.436803   16998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:47:13.439685   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:13.589930   16998 system_pods.go:59] 18 kube-system pods found
	I0401 19:47:13.589971   16998 system_pods.go:61] "amd-gpu-device-plugin-bcc9r" [c23b66cf-45c7-4934-893d-bb20072b09e3] Running
	I0401 19:47:13.589980   16998 system_pods.go:61] "coredns-668d6bf9bc-cfh7q" [f1e19243-d755-429f-adf4-cbba1763647f] Running
	I0401 19:47:13.589990   16998 system_pods.go:61] "csi-hostpath-attacher-0" [1d309d5f-aa0a-4c35-a70e-273f297e4e07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 19:47:13.590000   16998 system_pods.go:61] "csi-hostpath-resizer-0" [3ff0c75b-f68d-4d93-82fe-027230ee0090] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 19:47:13.590015   16998 system_pods.go:61] "csi-hostpathplugin-qn7g7" [af407583-e9cd-4c5c-93b2-29c17c4da3d6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 19:47:13.590026   16998 system_pods.go:61] "etcd-addons-357468" [4614fce5-5e01-469f-ac16-b52195876186] Running
	I0401 19:47:13.590035   16998 system_pods.go:61] "kube-apiserver-addons-357468" [c07e64ba-3a62-40c7-acad-862316fabe10] Running
	I0401 19:47:13.590042   16998 system_pods.go:61] "kube-controller-manager-addons-357468" [1a6c60a3-4b77-4d48-86f5-e1b4d3060eb2] Running
	I0401 19:47:13.590051   16998 system_pods.go:61] "kube-ingress-dns-minikube" [6cf917b0-6ad6-4a63-ad55-3d3e9fbc5a29] Running
	I0401 19:47:13.590059   16998 system_pods.go:61] "kube-proxy-rm6gh" [9a55728a-36d5-4139-aee2-c3c8d64a56b6] Running
	I0401 19:47:13.590065   16998 system_pods.go:61] "kube-scheduler-addons-357468" [21726162-bb55-4472-9936-40adb373b308] Running
	I0401 19:47:13.590076   16998 system_pods.go:61] "metrics-server-7fbb699795-5gb2p" [8290b608-2b6f-48d1-b0e9-b3224861fc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:47:13.590084   16998 system_pods.go:61] "nvidia-device-plugin-daemonset-vtmjq" [62241ccb-0ac0-423e-b2ec-9c837985d9ab] Running
	I0401 19:47:13.590095   16998 system_pods.go:61] "registry-6c88467877-9sz7s" [6aaae033-47aa-4ef3-84b0-7a7e433ed652] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0401 19:47:13.590106   16998 system_pods.go:61] "registry-proxy-tr78m" [2b69a23c-961a-4f69-bdf5-7b655e5ab42c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0401 19:47:13.590118   16998 system_pods.go:61] "snapshot-controller-68b874b76f-58kh8" [3cb6698d-08a0-410c-9d69-0760d8256362] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 19:47:13.590131   16998 system_pods.go:61] "snapshot-controller-68b874b76f-vbdbd" [69f8a0e9-05fc-480c-971e-8b11cf4fec68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 19:47:13.590140   16998 system_pods.go:61] "storage-provisioner" [cdf1e6c5-ba2e-4657-bee2-12904cbe2cb7] Running
	I0401 19:47:13.590150   16998 system_pods.go:74] duration metric: took 153.341144ms to wait for pod list to return data ...
	I0401 19:47:13.590163   16998 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:47:13.701297   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:13.701589   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:13.786567   16998 default_sa.go:45] found service account: "default"
	I0401 19:47:13.786590   16998 default_sa.go:55] duration metric: took 196.418258ms for default service account to be created ...
	I0401 19:47:13.786598   16998 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:47:13.876557   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:13.943604   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:13.991452   16998 system_pods.go:86] 18 kube-system pods found
	I0401 19:47:13.991491   16998 system_pods.go:89] "amd-gpu-device-plugin-bcc9r" [c23b66cf-45c7-4934-893d-bb20072b09e3] Running
	I0401 19:47:13.991501   16998 system_pods.go:89] "coredns-668d6bf9bc-cfh7q" [f1e19243-d755-429f-adf4-cbba1763647f] Running
	I0401 19:47:13.991512   16998 system_pods.go:89] "csi-hostpath-attacher-0" [1d309d5f-aa0a-4c35-a70e-273f297e4e07] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 19:47:13.991529   16998 system_pods.go:89] "csi-hostpath-resizer-0" [3ff0c75b-f68d-4d93-82fe-027230ee0090] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 19:47:13.991540   16998 system_pods.go:89] "csi-hostpathplugin-qn7g7" [af407583-e9cd-4c5c-93b2-29c17c4da3d6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 19:47:13.991552   16998 system_pods.go:89] "etcd-addons-357468" [4614fce5-5e01-469f-ac16-b52195876186] Running
	I0401 19:47:13.991559   16998 system_pods.go:89] "kube-apiserver-addons-357468" [c07e64ba-3a62-40c7-acad-862316fabe10] Running
	I0401 19:47:13.991564   16998 system_pods.go:89] "kube-controller-manager-addons-357468" [1a6c60a3-4b77-4d48-86f5-e1b4d3060eb2] Running
	I0401 19:47:13.991571   16998 system_pods.go:89] "kube-ingress-dns-minikube" [6cf917b0-6ad6-4a63-ad55-3d3e9fbc5a29] Running
	I0401 19:47:13.991576   16998 system_pods.go:89] "kube-proxy-rm6gh" [9a55728a-36d5-4139-aee2-c3c8d64a56b6] Running
	I0401 19:47:13.991584   16998 system_pods.go:89] "kube-scheduler-addons-357468" [21726162-bb55-4472-9936-40adb373b308] Running
	I0401 19:47:13.991592   16998 system_pods.go:89] "metrics-server-7fbb699795-5gb2p" [8290b608-2b6f-48d1-b0e9-b3224861fc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:47:13.991597   16998 system_pods.go:89] "nvidia-device-plugin-daemonset-vtmjq" [62241ccb-0ac0-423e-b2ec-9c837985d9ab] Running
	I0401 19:47:13.991606   16998 system_pods.go:89] "registry-6c88467877-9sz7s" [6aaae033-47aa-4ef3-84b0-7a7e433ed652] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0401 19:47:13.991613   16998 system_pods.go:89] "registry-proxy-tr78m" [2b69a23c-961a-4f69-bdf5-7b655e5ab42c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0401 19:47:13.991623   16998 system_pods.go:89] "snapshot-controller-68b874b76f-58kh8" [3cb6698d-08a0-410c-9d69-0760d8256362] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 19:47:13.991642   16998 system_pods.go:89] "snapshot-controller-68b874b76f-vbdbd" [69f8a0e9-05fc-480c-971e-8b11cf4fec68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 19:47:13.991647   16998 system_pods.go:89] "storage-provisioner" [cdf1e6c5-ba2e-4657-bee2-12904cbe2cb7] Running
	I0401 19:47:13.991657   16998 system_pods.go:126] duration metric: took 205.053161ms to wait for k8s-apps to be running ...
	I0401 19:47:13.991667   16998 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:47:13.991721   16998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:47:14.025660   16998 system_svc.go:56] duration metric: took 33.983884ms WaitForService to wait for kubelet
	I0401 19:47:14.025698   16998 kubeadm.go:582] duration metric: took 44.99900873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:47:14.025722   16998 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:47:14.397479   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:14.398400   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:14.398500   16998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:47:14.398521   16998 node_conditions.go:123] node cpu capacity is 2
	I0401 19:47:14.398533   16998 node_conditions.go:105] duration metric: took 372.804916ms to run NodePressure ...
	I0401 19:47:14.398544   16998 start.go:241] waiting for startup goroutines ...
	I0401 19:47:14.398810   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:14.440461   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:14.702362   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:14.702424   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:14.876356   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:14.942065   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:15.202512   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:15.207535   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:15.376969   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:15.440384   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:15.701521   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:15.702595   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:15.876903   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:15.940060   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:16.201046   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:16.201958   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:16.379940   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:16.481096   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:16.700869   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:16.701646   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:16.876575   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:16.940424   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:17.200935   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:17.202287   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:17.377337   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:17.440919   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:17.700849   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:17.702197   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:17.878772   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:17.979781   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:18.199697   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:18.201305   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:18.376681   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:18.477461   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:18.700861   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:18.701806   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:18.876627   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:18.940607   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:19.201018   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:19.201155   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:19.412487   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:19.439363   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:19.702112   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:19.702246   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:19.879049   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:19.940389   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:20.200486   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:20.200998   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:20.376563   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:20.439806   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:20.700543   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:20.701779   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:20.876747   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:20.939811   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:21.199731   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:21.201535   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:21.376357   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:21.440427   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:21.700604   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:21.701847   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:21.876678   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:21.939994   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:22.200507   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:22.201909   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:22.376862   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:22.441122   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:22.700136   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:22.701053   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:22.876751   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:22.939652   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:23.200694   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:23.201635   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:23.376911   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:23.440108   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:23.702011   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:23.702069   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:23.877420   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:23.940668   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:24.199753   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:24.201950   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:24.376878   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:24.442874   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:24.701397   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:24.704639   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:24.876615   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:24.940133   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:25.201810   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:25.201927   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:25.376781   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:25.439767   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:25.699992   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:25.701724   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:25.877633   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:25.940105   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:26.202256   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:26.202282   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:26.377104   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:26.440545   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:26.701742   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:26.701747   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:26.876412   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:26.939748   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:27.201305   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:27.202290   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:27.377575   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:27.440905   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:27.699851   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:27.700806   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:27.876449   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:27.940386   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:28.201388   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:28.201675   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:28.376772   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:28.444962   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:28.701073   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:28.701202   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:28.877344   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:28.940410   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:29.200479   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:29.202401   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:29.376482   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:29.439657   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:29.699590   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:29.701150   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:29.877631   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:29.940087   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:30.200467   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:30.202354   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:30.376477   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:30.440015   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:30.701764   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:30.701899   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:30.877122   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:30.940187   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:31.201726   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:31.201850   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:31.376305   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:31.441116   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:31.700389   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:31.701286   16998 kapi.go:107] duration metric: took 54.003275244s to wait for kubernetes.io/minikube-addons=registry ...
	I0401 19:47:31.877374   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:31.941081   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:32.200739   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:32.377046   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:32.440361   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:32.701733   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:32.876573   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:32.940500   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:33.202654   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:33.376713   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:33.440627   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:33.699935   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:33.877486   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:33.980610   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:34.201251   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:34.377747   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:34.441173   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:34.702114   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:34.877310   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:34.941213   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:35.200735   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:35.376952   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:35.440729   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:35.701022   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:35.876976   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:35.940126   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:36.200638   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:36.377062   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:36.440366   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:36.700898   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:36.877069   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:36.940810   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:37.202224   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:37.835706   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:37.836618   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:37.844734   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:37.886435   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:37.941027   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:38.200338   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:38.377518   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:38.440563   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:38.701077   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:38.877513   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:38.939742   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:39.199781   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:39.376411   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:39.441551   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:39.700739   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:39.877630   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:39.940379   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:40.200771   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:40.376687   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:40.439864   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:40.712700   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:41.090558   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:41.091198   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:41.200098   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:41.377642   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:41.440616   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:41.702871   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:41.876614   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:41.939832   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:42.200238   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:42.377260   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:42.441097   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:42.700478   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:42.876908   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:42.940797   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:43.204429   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:43.378053   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:43.440192   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:43.704689   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:43.878399   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:43.940562   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:44.200615   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:44.376774   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:44.440172   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:44.706702   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:44.877307   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:44.941760   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:45.202727   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:45.382994   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:45.440131   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:45.705152   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:45.877098   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:45.940262   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:46.200443   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:46.381274   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:46.440509   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:46.919874   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:46.920588   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:46.941724   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:47.200740   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:47.376465   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:47.440197   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:47.714759   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:47.876484   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:47.939969   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:48.200391   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:48.376233   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:48.441402   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:48.700838   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:48.881618   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:48.983131   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:49.202699   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:49.376517   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:49.439932   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:49.700367   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:49.877255   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:49.941355   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:50.201394   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:50.377155   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:50.441121   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:50.700091   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:50.876938   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:50.943034   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:51.200599   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:51.375999   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:51.440071   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:51.703446   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:51.876134   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:51.940551   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:52.200521   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:52.377290   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:52.442965   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:52.701581   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:52.877147   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:52.941496   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:53.202107   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:53.377058   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:53.441495   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:53.700654   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:53.876483   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:53.939904   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:54.200722   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:54.376510   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:54.439940   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:54.700736   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:54.877893   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:54.940600   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:55.228561   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:55.378693   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:55.478154   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:55.700422   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:55.876079   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:55.940265   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:56.201187   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:56.377454   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:56.440498   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:56.701069   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:56.877150   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:56.940819   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:57.201103   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:57.377638   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:57.440111   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:57.700880   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:57.877462   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:57.939629   16998 kapi.go:107] duration metric: took 1m19.00321048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0401 19:47:58.200012   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:58.376707   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:58.704229   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:58.877293   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:59.200778   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:59.377011   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:59.701343   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:59.876233   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:00.201015   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:00.377723   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:00.700601   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:00.876724   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:01.200145   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:01.377362   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:01.700420   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:01.875978   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:02.200007   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:02.376882   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:02.700901   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:02.876705   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:03.200531   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:03.376702   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:03.700005   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:03.876696   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:04.200496   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:04.376049   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:04.701109   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:04.877500   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:05.200392   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:05.377962   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:05.700519   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:05.876805   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:06.200276   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:06.377962   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:06.700688   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:06.876846   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:07.200782   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:07.377081   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:07.701100   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:07.878364   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:08.200570   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:08.377015   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:08.700917   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:08.876691   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:09.200267   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:09.379383   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:09.701374   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:09.877229   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:10.201018   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:10.376980   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:10.700696   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:10.876953   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:11.200532   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:11.377299   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:11.701052   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:11.877104   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:12.200439   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:12.376317   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:12.701957   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:12.876639   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:13.200629   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:13.377151   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:13.701242   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:13.877383   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:14.201234   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:14.377295   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:14.701623   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:14.877184   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:15.201138   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:15.378012   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:15.700502   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:15.876689   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:16.200561   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:16.376772   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:16.700756   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:16.877208   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:17.200540   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:17.377265   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:17.700990   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:17.877604   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:18.200053   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:18.377315   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:18.701116   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:18.877200   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:19.200121   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:19.377733   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:19.700916   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:19.877175   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:20.200615   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:20.376372   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:20.701476   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:20.876401   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:21.201393   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:21.376556   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:21.699786   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:21.876512   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:22.201466   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:22.376288   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:22.700708   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:22.877367   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:23.201437   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:23.376953   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:23.700878   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:23.876761   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:24.200568   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:24.376363   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:24.701358   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:24.877734   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:25.200066   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:25.412090   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:25.700982   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:25.876908   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:26.200740   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:26.376626   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:26.699914   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:26.877237   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:27.200647   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:27.376528   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:27.700413   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:27.876754   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:28.200007   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:28.377854   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:28.700028   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:28.877057   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:29.200896   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:29.376542   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:29.700898   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:29.877085   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:30.200989   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:30.377653   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:30.700264   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:30.877345   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:31.201154   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:31.377529   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:31.700934   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:31.877121   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:32.200201   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:32.377559   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:32.700367   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:32.877298   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:33.201422   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:33.376708   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:33.700897   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:33.878075   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:34.200734   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:34.377233   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:34.701325   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:34.877679   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:35.199944   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:35.377506   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:35.700862   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:35.877211   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:36.200784   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:36.376878   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:36.700613   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:36.876556   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:37.200895   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:37.377258   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:37.701114   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:37.877489   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:38.201006   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:38.377595   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:38.700040   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:38.877099   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:39.201433   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:39.376621   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:39.700938   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:39.876999   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:40.200522   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:40.376731   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:40.700526   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:40.877303   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:41.202251   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:41.377115   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:41.701424   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:41.876326   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:42.200588   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:42.376309   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:42.700416   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:42.877150   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:43.201066   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:43.377396   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:43.700843   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:43.876970   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:44.200653   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:44.377011   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:44.700580   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:44.877386   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:45.201864   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:45.377310   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:45.701173   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:45.876961   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:46.200701   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:46.376584   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:46.699946   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:46.876990   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:47.200536   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:47.376588   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:47.700859   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:47.877137   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:48.200711   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:48.377081   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:48.700874   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:48.877272   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:49.201075   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:49.377024   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:49.703464   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:49.877393   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:50.201002   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:50.376865   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:50.701054   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:50.877383   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:51.201549   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:51.376290   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:51.701721   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:51.876818   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:52.199893   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:52.377224   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:52.701415   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:52.876771   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:53.202353   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:53.378635   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:53.700745   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:53.877356   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:54.200951   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:54.377535   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:54.700893   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:54.877675   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:55.201183   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:55.377441   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:55.701344   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:55.877313   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:56.201430   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:56.376525   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:56.712835   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:56.877724   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:57.203039   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:57.377327   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:57.701135   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:57.877502   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:58.200975   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:58.377363   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:58.701567   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:58.876552   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:59.200716   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:59.377515   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:59.998491   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:59.998654   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:49:00.200183   16998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:49:00.377727   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:00.700823   16998 kapi.go:107] duration metric: took 2m23.004130611s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0401 19:49:00.876696   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:01.376855   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:01.877957   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:02.377617   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:02.878504   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:03.378145   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:03.877331   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:04.377330   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:04.877128   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:05.377572   16998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:49:05.877464   16998 kapi.go:107] duration metric: took 2m25.004069236s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0401 19:49:05.879237   16998 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-357468 cluster.
	I0401 19:49:05.880503   16998 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0401 19:49:05.881571   16998 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0401 19:49:05.882750   16998 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0401 19:49:05.883995   16998 addons.go:514] duration metric: took 2m36.8572399s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass cloud-spanner storage-provisioner inspektor-gadget metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0401 19:49:05.884033   16998 start.go:246] waiting for cluster config update ...
	I0401 19:49:05.884059   16998 start.go:255] writing updated cluster config ...
	I0401 19:49:05.884316   16998 ssh_runner.go:195] Run: rm -f paused
	I0401 19:49:05.938628   16998 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 19:49:05.940682   16998 out.go:177] * Done! kubectl is now configured to use "addons-357468" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.297013007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537142296928941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ade6d351-37af-44f4-b8cd-3803a35f33d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.297628239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17efc509-370d-4aa8-b733-5f14ff519af5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.297680275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17efc509-370d-4aa8-b733-5f14ff519af5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.297986279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58566dbd14cddb59bb7987291f1f06ee1f46e1f889a90724420b8ad51e07f2fc,PodSandboxId:164a9a575bb9d3f2f3f1b6a2d58c837fd436ea2412bfc3655f1efd02d0d45538,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743537003792735729,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87f5b89f-50aa-4374-b4de-f14b987a8435,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de211d1fc70928258b1dd67c41f6f8ec03cc83e99146405ff93e832d0d04c922,PodSandboxId:74859d6cde350e0946fff9e7b8d56c052b7cb88b20d71dc65fe76e6d6c554eb5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743536950830380539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d351faea-3a0e-4224-9e5f-f278ee6d59a9,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb68a046c185ca17291172b4488ece3f3afaab2d57902ac832423f40242a7d8,PodSandboxId:47f50bf89d592f3534fb340b5d0026800251d96ad9ce2caceb354b0b9814380e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743536940183826122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-c9p9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97e1784a-9374-4066-a5ef-47970fbf9aca,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fda6efe11911ab80ef6a49da3e5de3d349cb39030be4fe5d4f794d13b73839f,PodSandboxId:442db462e579a8314ecf62614fb47ee6432aec4da52b3501befc97873b9a4215,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861346005948,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-klv5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71787d54-f12b-44db-929c-cfb2c6edaea6,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841af2a2cff40e6d54299579db7e96b53c6a5e6a4ff6ea2c331249083f91331,PodSandboxId:fbea6c478d4ff4699b2e48dfd59ceaf69d1ddf1bb3b3d4b3faf07e3fc190952f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861229162939,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-swd62,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b6aec7c0-8bca-4e5a-8921-5dfdaa1bd536,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b1606fd77a632a3f0286e63ab2225f6f78f5e3e378f2eeca5b76c4213c581e,PodSandboxId:f779673148015f3283b6b9ca3972dd90ebacd1707164bc85751d00f76c7ee0b3,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743536831984105063,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bcc9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23b66cf-45c7-4934-893d-bb20072b09e3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292684c22b09f3af493b27c7c808835e14493c1b2fd5e2a92602202cb7d136fe,PodSandboxId:93490d4249b64bf9153f4a70d3007fb913a1490f41a34c4be1fd4d384db1fe09,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743536806754054215,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf917b0-6ad6-4a63-ad55-3d3e9fbc5a29,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d7db7ddc876169018948c401ad97c5240fd3320140c90beb784b8a051a4367,PodSandboxId:48f36e0ff1922f0004b63cac4216a6e1244c11d4affe706b56d565e5dd5e940b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743536797259309239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf1e6c5-ba2e-4657-bee2-12904cbe2cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafeecdf61a5b9e86919f7b7d4d1a083fdaa99af012d05c9db11ba28edc53c2c,PodSandboxId:c6a04c376b4707532583bd01b737c4165eb56c80e4349a780d3eb7935eb9e2cb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743536793325238350,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cfh7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e19243-d755-429f-adf4-cbba1763647f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:64b4c0edf62b7d5efaf6e9bef6185cc9bce9dc33699854015a37b00d4a7b3307,PodSandboxId:422005c4230d23e3fe9a58b9beeb7dccee5ee43fb06b855c567b6ba5cede4d7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743536789964592242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rm6gh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a55728a-36d5-4139-aee2-c3c8d64a56b6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeb71283df145c10cc5bbccdd754
f954f7b98430fb0daf8b51e41cc7120b7a7,PodSandboxId:6ab934cb2d64d956489647ebfdad887ee316478780fb7852d0805ea1c9cd82cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743536778847266876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb3555dbbebd54d405df20dd7b644c6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ea6472bd436e7105ce81b38191c1f65e6a48f2330bf
ee556e65ed0f8a9a54,PodSandboxId:d4c8f7a80344aadc3fafe2a3056c1390b548b2cc780ac20893445578c7f8ac6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743536778884036550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4be3169b641c9360ef9146c83e9c32,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d69f9cbd5df7d8aa3d5c80d6dcf011d2e407
e4ca3f088cb7b383e3c9e40d12f3,PodSandboxId:a87a45bd9622cb7ab42be05687511131540c260a5a65cc0fa4669bbe0d143193,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743536778809485383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1aafdb6b33142c5783ecab99522fb7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37045489d732e16ab2b48596059378f45092b7f684552335bb9ebb30bf262266,PodSandboxId:5a358
49837e2bac54c19e70f086ac0042f84102a1876a743c934d82c5b00687e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743536778828813346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d534b830da362d60e580db68e8481,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17efc509-370d-4aa8-b733-5f14ff519af5 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.335999011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eaf73761-166f-4e84-b715-fcdbeacf0be3 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.336067725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eaf73761-166f-4e84-b715-fcdbeacf0be3 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.337162574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d4e4c5d-6a27-437b-9af2-66ef0d782583 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.338459046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537142338382733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d4e4c5d-6a27-437b-9af2-66ef0d782583 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.339135765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bebf1c1-1b01-42f7-afef-ba02a38897fd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.339189822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bebf1c1-1b01-42f7-afef-ba02a38897fd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.339590045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58566dbd14cddb59bb7987291f1f06ee1f46e1f889a90724420b8ad51e07f2fc,PodSandboxId:164a9a575bb9d3f2f3f1b6a2d58c837fd436ea2412bfc3655f1efd02d0d45538,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743537003792735729,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87f5b89f-50aa-4374-b4de-f14b987a8435,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de211d1fc70928258b1dd67c41f6f8ec03cc83e99146405ff93e832d0d04c922,PodSandboxId:74859d6cde350e0946fff9e7b8d56c052b7cb88b20d71dc65fe76e6d6c554eb5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743536950830380539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d351faea-3a0e-4224-9e5f-f278ee6d59a9,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb68a046c185ca17291172b4488ece3f3afaab2d57902ac832423f40242a7d8,PodSandboxId:47f50bf89d592f3534fb340b5d0026800251d96ad9ce2caceb354b0b9814380e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743536940183826122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-c9p9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97e1784a-9374-4066-a5ef-47970fbf9aca,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fda6efe11911ab80ef6a49da3e5de3d349cb39030be4fe5d4f794d13b73839f,PodSandboxId:442db462e579a8314ecf62614fb47ee6432aec4da52b3501befc97873b9a4215,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861346005948,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-klv5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71787d54-f12b-44db-929c-cfb2c6edaea6,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841af2a2cff40e6d54299579db7e96b53c6a5e6a4ff6ea2c331249083f91331,PodSandboxId:fbea6c478d4ff4699b2e48dfd59ceaf69d1ddf1bb3b3d4b3faf07e3fc190952f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861229162939,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-swd62,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b6aec7c0-8bca-4e5a-8921-5dfdaa1bd536,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b1606fd77a632a3f0286e63ab2225f6f78f5e3e378f2eeca5b76c4213c581e,PodSandboxId:f779673148015f3283b6b9ca3972dd90ebacd1707164bc85751d00f76c7ee0b3,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743536831984105063,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bcc9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23b66cf-45c7-4934-893d-bb20072b09e3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292684c22b09f3af493b27c7c808835e14493c1b2fd5e2a92602202cb7d136fe,PodSandboxId:93490d4249b64bf9153f4a70d3007fb913a1490f41a34c4be1fd4d384db1fe09,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743536806754054215,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf917b0-6ad6-4a63-ad55-3d3e9fbc5a29,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d7db7ddc876169018948c401ad97c5240fd3320140c90beb784b8a051a4367,PodSandboxId:48f36e0ff1922f0004b63cac4216a6e1244c11d4affe706b56d565e5dd5e940b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743536797259309239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf1e6c5-ba2e-4657-bee2-12904cbe2cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafeecdf61a5b9e86919f7b7d4d1a083fdaa99af012d05c9db11ba28edc53c2c,PodSandboxId:c6a04c376b4707532583bd01b737c4165eb56c80e4349a780d3eb7935eb9e2cb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743536793325238350,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cfh7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e19243-d755-429f-adf4-cbba1763647f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:64b4c0edf62b7d5efaf6e9bef6185cc9bce9dc33699854015a37b00d4a7b3307,PodSandboxId:422005c4230d23e3fe9a58b9beeb7dccee5ee43fb06b855c567b6ba5cede4d7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743536789964592242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rm6gh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a55728a-36d5-4139-aee2-c3c8d64a56b6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeb71283df145c10cc5bbccdd754
f954f7b98430fb0daf8b51e41cc7120b7a7,PodSandboxId:6ab934cb2d64d956489647ebfdad887ee316478780fb7852d0805ea1c9cd82cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743536778847266876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb3555dbbebd54d405df20dd7b644c6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ea6472bd436e7105ce81b38191c1f65e6a48f2330bf
ee556e65ed0f8a9a54,PodSandboxId:d4c8f7a80344aadc3fafe2a3056c1390b548b2cc780ac20893445578c7f8ac6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743536778884036550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4be3169b641c9360ef9146c83e9c32,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d69f9cbd5df7d8aa3d5c80d6dcf011d2e407
e4ca3f088cb7b383e3c9e40d12f3,PodSandboxId:a87a45bd9622cb7ab42be05687511131540c260a5a65cc0fa4669bbe0d143193,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743536778809485383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1aafdb6b33142c5783ecab99522fb7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37045489d732e16ab2b48596059378f45092b7f684552335bb9ebb30bf262266,PodSandboxId:5a358
49837e2bac54c19e70f086ac0042f84102a1876a743c934d82c5b00687e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743536778828813346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d534b830da362d60e580db68e8481,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bebf1c1-1b01-42f7-afef-ba02a38897fd name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.376829078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd3b2438-6b80-4970-8a39-e11156316e8d name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.376904525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd3b2438-6b80-4970-8a39-e11156316e8d name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.378215336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64a98fda-3ab9-4981-87eb-c80eae621be5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.379522954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537142379497932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64a98fda-3ab9-4981-87eb-c80eae621be5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.380153858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bbc4917-c403-4662-93d2-de0a405b5098 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.380213666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bbc4917-c403-4662-93d2-de0a405b5098 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.380553393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58566dbd14cddb59bb7987291f1f06ee1f46e1f889a90724420b8ad51e07f2fc,PodSandboxId:164a9a575bb9d3f2f3f1b6a2d58c837fd436ea2412bfc3655f1efd02d0d45538,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743537003792735729,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87f5b89f-50aa-4374-b4de-f14b987a8435,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de211d1fc70928258b1dd67c41f6f8ec03cc83e99146405ff93e832d0d04c922,PodSandboxId:74859d6cde350e0946fff9e7b8d56c052b7cb88b20d71dc65fe76e6d6c554eb5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743536950830380539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d351faea-3a0e-4224-9e5f-f278ee6d59a9,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb68a046c185ca17291172b4488ece3f3afaab2d57902ac832423f40242a7d8,PodSandboxId:47f50bf89d592f3534fb340b5d0026800251d96ad9ce2caceb354b0b9814380e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743536940183826122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-c9p9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97e1784a-9374-4066-a5ef-47970fbf9aca,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fda6efe11911ab80ef6a49da3e5de3d349cb39030be4fe5d4f794d13b73839f,PodSandboxId:442db462e579a8314ecf62614fb47ee6432aec4da52b3501befc97873b9a4215,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861346005948,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-klv5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71787d54-f12b-44db-929c-cfb2c6edaea6,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841af2a2cff40e6d54299579db7e96b53c6a5e6a4ff6ea2c331249083f91331,PodSandboxId:fbea6c478d4ff4699b2e48dfd59ceaf69d1ddf1bb3b3d4b3faf07e3fc190952f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861229162939,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-swd62,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b6aec7c0-8bca-4e5a-8921-5dfdaa1bd536,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b1606fd77a632a3f0286e63ab2225f6f78f5e3e378f2eeca5b76c4213c581e,PodSandboxId:f779673148015f3283b6b9ca3972dd90ebacd1707164bc85751d00f76c7ee0b3,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743536831984105063,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bcc9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23b66cf-45c7-4934-893d-bb20072b09e3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292684c22b09f3af493b27c7c808835e14493c1b2fd5e2a92602202cb7d136fe,PodSandboxId:93490d4249b64bf9153f4a70d3007fb913a1490f41a34c4be1fd4d384db1fe09,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743536806754054215,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf917b0-6ad6-4a63-ad55-3d3e9fbc5a29,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d7db7ddc876169018948c401ad97c5240fd3320140c90beb784b8a051a4367,PodSandboxId:48f36e0ff1922f0004b63cac4216a6e1244c11d4affe706b56d565e5dd5e940b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743536797259309239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf1e6c5-ba2e-4657-bee2-12904cbe2cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafeecdf61a5b9e86919f7b7d4d1a083fdaa99af012d05c9db11ba28edc53c2c,PodSandboxId:c6a04c376b4707532583bd01b737c4165eb56c80e4349a780d3eb7935eb9e2cb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743536793325238350,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cfh7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e19243-d755-429f-adf4-cbba1763647f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:64b4c0edf62b7d5efaf6e9bef6185cc9bce9dc33699854015a37b00d4a7b3307,PodSandboxId:422005c4230d23e3fe9a58b9beeb7dccee5ee43fb06b855c567b6ba5cede4d7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743536789964592242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rm6gh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a55728a-36d5-4139-aee2-c3c8d64a56b6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeb71283df145c10cc5bbccdd754
f954f7b98430fb0daf8b51e41cc7120b7a7,PodSandboxId:6ab934cb2d64d956489647ebfdad887ee316478780fb7852d0805ea1c9cd82cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743536778847266876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb3555dbbebd54d405df20dd7b644c6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ea6472bd436e7105ce81b38191c1f65e6a48f2330bf
ee556e65ed0f8a9a54,PodSandboxId:d4c8f7a80344aadc3fafe2a3056c1390b548b2cc780ac20893445578c7f8ac6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743536778884036550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4be3169b641c9360ef9146c83e9c32,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d69f9cbd5df7d8aa3d5c80d6dcf011d2e407
e4ca3f088cb7b383e3c9e40d12f3,PodSandboxId:a87a45bd9622cb7ab42be05687511131540c260a5a65cc0fa4669bbe0d143193,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743536778809485383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1aafdb6b33142c5783ecab99522fb7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37045489d732e16ab2b48596059378f45092b7f684552335bb9ebb30bf262266,PodSandboxId:5a358
49837e2bac54c19e70f086ac0042f84102a1876a743c934d82c5b00687e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743536778828813346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d534b830da362d60e580db68e8481,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bbc4917-c403-4662-93d2-de0a405b5098 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.415598655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fab9883e-940d-4100-b400-51066a9dea6b name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.415672991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fab9883e-940d-4100-b400-51066a9dea6b name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.416799721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f26dbd0-8711-4a0c-840c-a5ba404e5f2e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.419726977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537142419694388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f26dbd0-8711-4a0c-840c-a5ba404e5f2e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.420357932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d6071fa-8b72-4f02-830f-29eeec05ead8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.420520535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d6071fa-8b72-4f02-830f-29eeec05ead8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:22 addons-357468 crio[665]: time="2025-04-01 19:52:22.420876395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58566dbd14cddb59bb7987291f1f06ee1f46e1f889a90724420b8ad51e07f2fc,PodSandboxId:164a9a575bb9d3f2f3f1b6a2d58c837fd436ea2412bfc3655f1efd02d0d45538,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1743537003792735729,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 87f5b89f-50aa-4374-b4de-f14b987a8435,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de211d1fc70928258b1dd67c41f6f8ec03cc83e99146405ff93e832d0d04c922,PodSandboxId:74859d6cde350e0946fff9e7b8d56c052b7cb88b20d71dc65fe76e6d6c554eb5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1743536950830380539,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d351faea-3a0e-4224-9e5f-f278ee6d59a9,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb68a046c185ca17291172b4488ece3f3afaab2d57902ac832423f40242a7d8,PodSandboxId:47f50bf89d592f3534fb340b5d0026800251d96ad9ce2caceb354b0b9814380e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1743536940183826122,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-c9p9b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97e1784a-9374-4066-a5ef-47970fbf9aca,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8fda6efe11911ab80ef6a49da3e5de3d349cb39030be4fe5d4f794d13b73839f,PodSandboxId:442db462e579a8314ecf62614fb47ee6432aec4da52b3501befc97873b9a4215,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861346005948,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-klv5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 71787d54-f12b-44db-929c-cfb2c6edaea6,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841af2a2cff40e6d54299579db7e96b53c6a5e6a4ff6ea2c331249083f91331,PodSandboxId:fbea6c478d4ff4699b2e48dfd59ceaf69d1ddf1bb3b3d4b3faf07e3fc190952f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1743536861229162939,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-swd62,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b6aec7c0-8bca-4e5a-8921-5dfdaa1bd536,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2b1606fd77a632a3f0286e63ab2225f6f78f5e3e378f2eeca5b76c4213c581e,PodSandboxId:f779673148015f3283b6b9ca3972dd90ebacd1707164bc85751d00f76c7ee0b3,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1743536831984105063,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bcc9r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23b66cf-45c7-4934-893d-bb20072b09e3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292684c22b09f3af493b27c7c808835e14493c1b2fd5e2a92602202cb7d136fe,PodSandboxId:93490d4249b64bf9153f4a70d3007fb913a1490f41a34c4be1fd4d384db1fe09,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1743536806754054215,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cf917b0-6ad6-4a63-ad55-3d3e9fbc5a29,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d7db7ddc876169018948c401ad97c5240fd3320140c90beb784b8a051a4367,PodSandboxId:48f36e0ff1922f0004b63cac4216a6e1244c11d4affe706b56d565e5dd5e940b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743536797259309239,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf1e6c5-ba2e-4657-bee2-12904cbe2cb7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafeecdf61a5b9e86919f7b7d4d1a083fdaa99af012d05c9db11ba28edc53c2c,PodSandboxId:c6a04c376b4707532583bd01b737c4165eb56c80e4349a780d3eb7935eb9e2cb,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743536793325238350,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-cfh7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e19243-d755-429f-adf4-cbba1763647f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:64b4c0edf62b7d5efaf6e9bef6185cc9bce9dc33699854015a37b00d4a7b3307,PodSandboxId:422005c4230d23e3fe9a58b9beeb7dccee5ee43fb06b855c567b6ba5cede4d7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743536789964592242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rm6gh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a55728a-36d5-4139-aee2-c3c8d64a56b6,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeb71283df145c10cc5bbccdd754
f954f7b98430fb0daf8b51e41cc7120b7a7,PodSandboxId:6ab934cb2d64d956489647ebfdad887ee316478780fb7852d0805ea1c9cd82cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743536778847266876,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb3555dbbebd54d405df20dd7b644c6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ea6472bd436e7105ce81b38191c1f65e6a48f2330bf
ee556e65ed0f8a9a54,PodSandboxId:d4c8f7a80344aadc3fafe2a3056c1390b548b2cc780ac20893445578c7f8ac6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743536778884036550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a4be3169b641c9360ef9146c83e9c32,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d69f9cbd5df7d8aa3d5c80d6dcf011d2e407
e4ca3f088cb7b383e3c9e40d12f3,PodSandboxId:a87a45bd9622cb7ab42be05687511131540c260a5a65cc0fa4669bbe0d143193,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743536778809485383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1aafdb6b33142c5783ecab99522fb7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37045489d732e16ab2b48596059378f45092b7f684552335bb9ebb30bf262266,PodSandboxId:5a358
49837e2bac54c19e70f086ac0042f84102a1876a743c934d82c5b00687e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743536778828813346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-357468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7d534b830da362d60e580db68e8481,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d6071fa-8b72-4f02-830f-29eeec05ead8 name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58566dbd14cdd       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   164a9a575bb9d       nginx
	de211d1fc7092       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   74859d6cde350       busybox
	3fb68a046c185       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   47f50bf89d592       ingress-nginx-controller-56d7c84fd4-c9p9b
	8fda6efe11911       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   442db462e579a       ingress-nginx-admission-patch-klv5s
	f841af2a2cff4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   fbea6c478d4ff       ingress-nginx-admission-create-swd62
	f2b1606fd77a6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   f779673148015       amd-gpu-device-plugin-bcc9r
	292684c22b09f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   93490d4249b64       kube-ingress-dns-minikube
	05d7db7ddc876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   48f36e0ff1922       storage-provisioner
	aafeecdf61a5b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   c6a04c376b470       coredns-668d6bf9bc-cfh7q
	64b4c0edf62b7       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             5 minutes ago       Running             kube-proxy                0                   422005c4230d2       kube-proxy-rm6gh
	686ea6472bd43       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             6 minutes ago       Running             kube-controller-manager   0                   d4c8f7a80344a       kube-controller-manager-addons-357468
	fdeb71283df14       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             6 minutes ago       Running             kube-scheduler            0                   6ab934cb2d64d       kube-scheduler-addons-357468
	37045489d732e       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             6 minutes ago       Running             kube-apiserver            0                   5a35849837e2b       kube-apiserver-addons-357468
	d69f9cbd5df7d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             6 minutes ago       Running             etcd                      0                   a87a45bd9622c       etcd-addons-357468
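
For reference, the container status table above is the node-side CRI-O view of the same containers listed in the earlier ListContainers responses. A minimal sketch of how to re-query it for this profile, assuming the addons-357468 VM is still running and following the report's own minikube invocation style:

	# List all CRI-O containers on the node, including the exited admission create/patch jobs.
	out/minikube-linux-amd64 -p addons-357468 ssh "sudo crictl ps -a"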
	
	
	==> coredns [aafeecdf61a5b9e86919f7b7d4d1a083fdaa99af012d05c9db11ba28edc53c2c] <==
	[INFO] 10.244.0.8:53441 - 59699 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000093019s
	[INFO] 10.244.0.8:53441 - 3623 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000121448s
	[INFO] 10.244.0.8:53441 - 34790 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000438588s
	[INFO] 10.244.0.8:53441 - 47805 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000086301s
	[INFO] 10.244.0.8:53441 - 12314 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000298034s
	[INFO] 10.244.0.8:53441 - 29616 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000120386s
	[INFO] 10.244.0.8:53441 - 64529 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000120366s
	[INFO] 10.244.0.8:47820 - 20200 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130833s
	[INFO] 10.244.0.8:47820 - 19922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298207s
	[INFO] 10.244.0.8:32990 - 48897 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092946s
	[INFO] 10.244.0.8:32990 - 48638 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000255731s
	[INFO] 10.244.0.8:37324 - 37419 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095342s
	[INFO] 10.244.0.8:37324 - 37126 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00029383s
	[INFO] 10.244.0.8:36084 - 17070 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135208s
	[INFO] 10.244.0.8:36084 - 16885 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106739s
	[INFO] 10.244.0.23:56379 - 65466 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000382365s
	[INFO] 10.244.0.23:42873 - 3949 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000146293s
	[INFO] 10.244.0.23:53236 - 34730 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115165s
	[INFO] 10.244.0.23:41711 - 57613 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000063843s
	[INFO] 10.244.0.23:33474 - 11687 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108385s
	[INFO] 10.244.0.23:53713 - 24100 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114193s
	[INFO] 10.244.0.23:36441 - 6458 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00398178s
	[INFO] 10.244.0.23:37597 - 15643 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.004356915s
	[INFO] 10.244.0.26:42549 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000503598s
	[INFO] 10.244.0.26:50947 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124063s
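
The NXDOMAIN answers above come from cluster-domain search-path expansion of in-cluster lookups and are followed by NOERROR answers for the fully qualified names, so on their own they do not indicate a DNS failure. A sketch of how to pull the same stream directly, using the pod name reported in this run:

	# Tail the CoreDNS pod that served these queries.
	kubectl --context addons-357468 -n kube-system logs coredns-668d6bf9bc-cfh7q --tail=50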
	
	
	==> describe nodes <==
	Name:               addons-357468
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-357468
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=addons-357468
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T19_46_24_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-357468
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 19:46:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-357468
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 19:52:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 19:50:29 +0000   Tue, 01 Apr 2025 19:46:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 19:50:29 +0000   Tue, 01 Apr 2025 19:46:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 19:50:29 +0000   Tue, 01 Apr 2025 19:46:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Apr 2025 19:50:29 +0000   Tue, 01 Apr 2025 19:46:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    addons-357468
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 24808dfd85a74043982a99e7609f5ee6
	  System UUID:                24808dfd-85a7-4043-982a-99e7609f5ee6
	  Boot ID:                    5b4c7890-de3a-4fd3-ba6b-54832e6404fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     hello-world-app-7d9564db4-9qn8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-c9p9b    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m45s
	  kube-system                 amd-gpu-device-plugin-bcc9r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 coredns-668d6bf9bc-cfh7q                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m53s
	  kube-system                 etcd-addons-357468                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m58s
	  kube-system                 kube-apiserver-addons-357468                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-controller-manager-addons-357468        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-rm6gh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-scheduler-addons-357468                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m51s  kube-proxy       
	  Normal  Starting                 5m59s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m58s  kubelet          Node addons-357468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s  kubelet          Node addons-357468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s  kubelet          Node addons-357468 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m57s  kubelet          Node addons-357468 status is now: NodeReady
	  Normal  RegisteredNode           5m55s  node-controller  Node addons-357468 event: Registered Node addons-357468 in Controller
	
	
	==> dmesg <==
	[  +0.110044] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.201531] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.029746] kauditd_printk_skb: 152 callbacks suppressed
	[  +5.374886] kauditd_printk_skb: 59 callbacks suppressed
	[Apr 1 19:47] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.717086] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.618793] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.800237] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.002779] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.146640] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.860013] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.378697] kauditd_printk_skb: 23 callbacks suppressed
	[Apr 1 19:48] kauditd_printk_skb: 3 callbacks suppressed
	[Apr 1 19:49] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.185483] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.535505] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.959260] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.655428] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.098390] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.121943] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.012478] kauditd_printk_skb: 36 callbacks suppressed
	[Apr 1 19:50] kauditd_printk_skb: 19 callbacks suppressed
	[ +14.830167] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.899305] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.381320] kauditd_printk_skb: 49 callbacks suppressed
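
The repeated "kauditd_printk_skb: N callbacks suppressed" messages are kernel audit-log rate limiting, not errors. A sketch of how to re-read the kernel ring buffer on the node, using the same ssh path as the rest of the report:

	# Show the most recent kernel messages from the minikube VM.
	out/minikube-linux-amd64 -p addons-357468 ssh "sudo dmesg | tail -n 25"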
	
	
	==> etcd [d69f9cbd5df7d8aa3d5c80d6dcf011d2e407e4ca3f088cb7b383e3c9e40d12f3] <==
	{"level":"info","ts":"2025-04-01T19:49:27.419573Z","caller":"traceutil/trace.go:171","msg":"trace[1876937039] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1327; }","duration":"105.495314ms","start":"2025-04-01T19:49:27.314063Z","end":"2025-04-01T19:49:27.419559Z","steps":["trace[1876937039] 'process raft request'  (duration: 92.737745ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:49:36.439574Z","caller":"traceutil/trace.go:171","msg":"trace[399408126] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"378.875773ms","start":"2025-04-01T19:49:36.060683Z","end":"2025-04-01T19:49:36.439559Z","steps":["trace[399408126] 'process raft request'  (duration: 378.715507ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:49:36.439702Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T19:49:36.060645Z","time spent":"378.997689ms","remote":"127.0.0.1:51372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4553,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-7fbb699795-5gb2p\" mod_revision:1400 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-7fbb699795-5gb2p\" value_size:4487 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-7fbb699795-5gb2p\" > >"}
	{"level":"info","ts":"2025-04-01T19:49:36.439827Z","caller":"traceutil/trace.go:171","msg":"trace[127517996] linearizableReadLoop","detail":"{readStateIndex:1461; appliedIndex:1461; }","duration":"324.108832ms","start":"2025-04-01T19:49:36.115703Z","end":"2025-04-01T19:49:36.439812Z","steps":["trace[127517996] 'read index received'  (duration: 324.101092ms)","trace[127517996] 'applied index is now lower than readState.Index'  (duration: 6.628µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-01T19:49:36.440054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.338449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T19:49:36.440178Z","caller":"traceutil/trace.go:171","msg":"trace[915055605] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1402; }","duration":"324.486418ms","start":"2025-04-01T19:49:36.115680Z","end":"2025-04-01T19:49:36.440166Z","steps":["trace[915055605] 'agreement among raft nodes before linearized reading'  (duration: 324.281961ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:49:36.440378Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T19:49:36.115660Z","time spent":"324.707839ms","remote":"127.0.0.1:51164","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-04-01T19:49:36.446260Z","caller":"traceutil/trace.go:171","msg":"trace[563621073] transaction","detail":"{read_only:false; response_revision:1403; number_of_response:1; }","duration":"130.557096ms","start":"2025-04-01T19:49:36.315689Z","end":"2025-04-01T19:49:36.446246Z","steps":["trace[563621073] 'process raft request'  (duration: 129.128989ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:49:36.446975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.981067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-04-01T19:49:36.447991Z","caller":"traceutil/trace.go:171","msg":"trace[2045988609] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1403; }","duration":"280.766088ms","start":"2025-04-01T19:49:36.166954Z","end":"2025-04-01T19:49:36.447721Z","steps":["trace[2045988609] 'agreement among raft nodes before linearized reading'  (duration: 278.935556ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:49:53.838795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.709345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-cc9755fc7-xftcs\" limit:1 ","response":"range_response_count:1 size:3511"}
	{"level":"info","ts":"2025-04-01T19:49:53.838843Z","caller":"traceutil/trace.go:171","msg":"trace[1894366632] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-cc9755fc7-xftcs; range_end:; response_count:1; response_revision:1587; }","duration":"165.789518ms","start":"2025-04-01T19:49:53.673041Z","end":"2025-04-01T19:49:53.838831Z","steps":["trace[1894366632] 'range keys from in-memory index tree'  (duration: 165.501266ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:49:53.838946Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.539123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T19:49:53.838982Z","caller":"traceutil/trace.go:171","msg":"trace[1008702752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1587; }","duration":"219.598208ms","start":"2025-04-01T19:49:53.619373Z","end":"2025-04-01T19:49:53.838971Z","steps":["trace[1008702752] 'range keys from in-memory index tree'  (duration: 219.394222ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:50:02.435308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.971254ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T19:50:02.435381Z","caller":"traceutil/trace.go:171","msg":"trace[1312093401] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1649; }","duration":"375.064061ms","start":"2025-04-01T19:50:02.060305Z","end":"2025-04-01T19:50:02.435369Z","steps":["trace[1312093401] 'range keys from in-memory index tree'  (duration: 374.950947ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:50:02.440380Z","caller":"traceutil/trace.go:171","msg":"trace[803909234] linearizableReadLoop","detail":"{readStateIndex:1719; appliedIndex:1718; }","duration":"327.248464ms","start":"2025-04-01T19:50:02.113118Z","end":"2025-04-01T19:50:02.440366Z","steps":["trace[803909234] 'read index received'  (duration: 327.119309ms)","trace[803909234] 'applied index is now lower than readState.Index'  (duration: 128.815µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T19:50:02.440731Z","caller":"traceutil/trace.go:171","msg":"trace[37837617] transaction","detail":"{read_only:false; response_revision:1650; number_of_response:1; }","duration":"387.389783ms","start":"2025-04-01T19:50:02.053325Z","end":"2025-04-01T19:50:02.440715Z","steps":["trace[37837617] 'process raft request'  (duration: 386.954044ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:50:02.440803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T19:50:02.053307Z","time spent":"387.44547ms","remote":"127.0.0.1:36750","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1342,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-5d4544ab-cc29-4070-80d4-8cd914d68587\" mod_revision:0 > success:<request_put:<key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents/snapcontent-5d4544ab-cc29-4070-80d4-8cd914d68587\" value_size:1229 >> failure:<>"}
	{"level":"warn","ts":"2025-04-01T19:50:02.440955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.832739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 ","response":"range_response_count:1 size:635"}
	{"level":"info","ts":"2025-04-01T19:50:02.440975Z","caller":"traceutil/trace.go:171","msg":"trace[1338793133] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:1; response_revision:1650; }","duration":"327.874344ms","start":"2025-04-01T19:50:02.113094Z","end":"2025-04-01T19:50:02.440969Z","steps":["trace[1338793133] 'agreement among raft nodes before linearized reading'  (duration: 327.805306ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:50:02.440992Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T19:50:02.113080Z","time spent":"327.907984ms","remote":"127.0.0.1:51390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":659,"request content":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 "}
	{"level":"warn","ts":"2025-04-01T19:50:02.441127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.621671ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T19:50:02.441184Z","caller":"traceutil/trace.go:171","msg":"trace[695906322] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1650; }","duration":"316.701423ms","start":"2025-04-01T19:50:02.124475Z","end":"2025-04-01T19:50:02.441176Z","steps":["trace[695906322] 'agreement among raft nodes before linearized reading'  (duration: 316.626551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:50:02.441206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T19:50:02.124407Z","time spent":"316.793714ms","remote":"127.0.0.1:51164","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 19:52:22 up 6 min,  0 users,  load average: 0.24, 0.91, 0.55
	Linux addons-357468 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [37045489d732e16ab2b48596059378f45092b7f684552335bb9ebb30bf262266] <==
	E0401 19:47:16.315637       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0401 19:49:18.764682       1 conn.go:339] Error on socket receive: read tcp 192.168.39.65:8443->192.168.39.1:49466: use of closed network connection
	E0401 19:49:18.946045       1 conn.go:339] Error on socket receive: read tcp 192.168.39.65:8443->192.168.39.1:49486: use of closed network connection
	I0401 19:49:28.390026       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.158.179"}
	I0401 19:49:52.404957       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0401 19:49:52.612654       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.56.3"}
	I0401 19:49:55.997350       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0401 19:49:57.040534       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0401 19:50:01.875820       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0401 19:50:17.283180       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0401 19:50:18.039123       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0401 19:50:25.862842       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:50:25.862975       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:50:25.894530       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:50:25.894688       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:50:25.919503       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:50:25.919725       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:50:25.946210       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:50:25.946515       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:50:26.001273       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:50:26.003070       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0401 19:50:26.947045       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0401 19:50:27.002574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0401 19:50:27.078805       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0401 19:52:21.204124       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.93.204"}
	
	
	==> kube-controller-manager [686ea6472bd436e7105ce81b38191c1f65e6a48f2330bfee556e65ed0f8a9a54] <==
	E0401 19:51:09.559276       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:33.627185       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:33.628541       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0401 19:51:33.629328       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:33.629479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:42.325722       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:42.327003       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0401 19:51:42.327897       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:42.327950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:49.763257       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:49.764383       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0401 19:51:49.765373       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:49.765490       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:56.878558       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:56.879894       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0401 19:51:56.880870       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:56.880935       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:52:20.030945       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:52:20.031990       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0401 19:52:20.032781       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:52:20.032828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0401 19:52:21.021146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="30.012703ms"
	I0401 19:52:21.034382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.188391ms"
	I0401 19:52:21.034524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.968µs"
	I0401 19:52:21.050562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="37.111µs"
	
	
	==> kube-proxy [64b4c0edf62b7d5efaf6e9bef6185cc9bce9dc33699854015a37b00d4a7b3307] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 19:46:30.976853       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 19:46:31.014063       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	E0401 19:46:31.014186       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 19:46:31.224189       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 19:46:31.228594       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:46:31.228670       1 server_linux.go:170] "Using iptables Proxier"
	I0401 19:46:31.264891       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 19:46:31.265163       1 server.go:497] "Version info" version="v1.32.2"
	I0401 19:46:31.265175       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:46:31.272975       1 config.go:199] "Starting service config controller"
	I0401 19:46:31.281539       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 19:46:31.281631       1 config.go:105] "Starting endpoint slice config controller"
	I0401 19:46:31.281637       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 19:46:31.282161       1 config.go:329] "Starting node config controller"
	I0401 19:46:31.282168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 19:46:31.383567       1 shared_informer.go:320] Caches are synced for node config
	I0401 19:46:31.383649       1 shared_informer.go:320] Caches are synced for service config
	I0401 19:46:31.383659       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fdeb71283df145c10cc5bbccdd754f954f7b98430fb0daf8b51e41cc7120b7a7] <==
	W0401 19:46:21.452367       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 19:46:21.452391       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:21.452467       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 19:46:21.452495       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:21.452610       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:46:21.452648       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:21.452789       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:46:21.452824       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 19:46:22.410092       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:46:22.410146       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.450760       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0401 19:46:22.450941       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.451987       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 19:46:22.452055       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.458164       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 19:46:22.458228       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.545872       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 19:46:22.545920       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.592742       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 19:46:22.592835       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.606032       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 19:46:22.606118       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:22.611547       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:46:22.611687       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0401 19:46:23.046042       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:51:34 addons-357468 kubelet[1230]: E0401 19:51:34.590615    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537094590244131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:34 addons-357468 kubelet[1230]: E0401 19:51:34.590654    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537094590244131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:42 addons-357468 kubelet[1230]: I0401 19:51:42.087389    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 01 19:51:44 addons-357468 kubelet[1230]: E0401 19:51:44.593137    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537104592651831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:44 addons-357468 kubelet[1230]: E0401 19:51:44.593626    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537104592651831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:54 addons-357468 kubelet[1230]: E0401 19:51:54.595998    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537114595693482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:54 addons-357468 kubelet[1230]: E0401 19:51:54.596072    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537114595693482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:52:04 addons-357468 kubelet[1230]: E0401 19:52:04.598999    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537124598558621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:52:04 addons-357468 kubelet[1230]: E0401 19:52:04.599498    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537124598558621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:52:14 addons-357468 kubelet[1230]: E0401 19:52:14.602083    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537134601751206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:52:14 addons-357468 kubelet[1230]: E0401 19:52:14.602126    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537134601751206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031050    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="1d309d5f-aa0a-4c35-a70e-273f297e4e07" containerName="csi-attacher"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031097    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="69f8a0e9-05fc-480c-971e-8b11cf4fec68" containerName="volume-snapshot-controller"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031104    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="af407583-e9cd-4c5c-93b2-29c17c4da3d6" containerName="hostpath"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031109    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="bb2978a1-f3d5-475c-a8bb-79f50b5bf665" containerName="local-path-provisioner"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031115    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="3cb6698d-08a0-410c-9d69-0760d8256362" containerName="volume-snapshot-controller"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031119    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="3ff0c75b-f68d-4d93-82fe-027230ee0090" containerName="csi-resizer"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031123    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="af407583-e9cd-4c5c-93b2-29c17c4da3d6" containerName="node-driver-registrar"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031128    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="af407583-e9cd-4c5c-93b2-29c17c4da3d6" containerName="csi-snapshotter"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031132    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="af407583-e9cd-4c5c-93b2-29c17c4da3d6" containerName="liveness-probe"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031137    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="af407583-e9cd-4c5c-93b2-29c17c4da3d6" containerName="csi-provisioner"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031141    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="0d4da170-5d2d-411b-8f98-496e54109da9" containerName="task-pv-container"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.031146    1230 memory_manager.go:355] "RemoveStaleState removing state" podUID="af407583-e9cd-4c5c-93b2-29c17c4da3d6" containerName="csi-external-health-monitor-controller"
	Apr 01 19:52:21 addons-357468 kubelet[1230]: I0401 19:52:21.176518    1230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fr7\" (UniqueName: \"kubernetes.io/projected/465d0ba8-b2df-46cb-be17-2a98098f865d-kube-api-access-x8fr7\") pod \"hello-world-app-7d9564db4-9qn8z\" (UID: \"465d0ba8-b2df-46cb-be17-2a98098f865d\") " pod="default/hello-world-app-7d9564db4-9qn8z"
	Apr 01 19:52:22 addons-357468 kubelet[1230]: I0401 19:52:22.087297    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-bcc9r" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [05d7db7ddc876169018948c401ad97c5240fd3320140c90beb784b8a051a4367] <==
	I0401 19:46:38.288568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:46:38.431728       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:46:38.431828       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:46:38.465022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:46:38.465240       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-357468_c632b4ad-d96e-418f-a44a-7b9c06f7be56!
	I0401 19:46:38.465665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"518478f1-a57a-4d87-b4e2-962ff30457df", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-357468_c632b4ad-d96e-418f-a44a-7b9c06f7be56 became leader
	I0401 19:46:38.569766       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-357468_c632b4ad-d96e-418f-a44a-7b9c06f7be56!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-357468 -n addons-357468
helpers_test.go:261: (dbg) Run:  kubectl --context addons-357468 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-9qn8z ingress-nginx-admission-create-swd62 ingress-nginx-admission-patch-klv5s
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-357468 describe pod hello-world-app-7d9564db4-9qn8z ingress-nginx-admission-create-swd62 ingress-nginx-admission-patch-klv5s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-357468 describe pod hello-world-app-7d9564db4-9qn8z ingress-nginx-admission-create-swd62 ingress-nginx-admission-patch-klv5s: exit status 1 (74.935622ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-9qn8z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-357468/192.168.39.65
	Start Time:       Tue, 01 Apr 2025 19:52:21 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8fr7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x8fr7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-9qn8z to addons-357468
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-swd62" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-klv5s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-357468 describe pod hello-world-app-7d9564db4-9qn8z ingress-nginx-admission-create-swd62 ingress-nginx-admission-patch-klv5s: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable ingress-dns --alsologtostderr -v=1: (1.35583817s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable ingress --alsologtostderr -v=1: (7.743457238s)
--- FAIL: TestAddons/parallel/Ingress (160.59s)
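For reference, the post-mortem above first enumerates pods whose status.phase is not Running and then describes them (helpers_test.go:261 and :277 above). Below is a minimal Go sketch of that flow, shelling out to the same kubectl commands shown in this section; the file name and error handling are illustrative, not minikube's actual helper code. Pods that have already been cleaned up (or live in another namespace) come back as NotFound, which is what produced the exit status 1 above.

	// postmortem_sketch.go - illustrative only; not the actual helpers_test.go code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "addons-357468" // kubectl context / minikube profile name from the report

		// Step 1: list pods in any namespace whose phase is not Running
		// (same kubectl invocation as helpers_test.go:261 above).
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("listing non-running pods failed: %v\n%s\n", err, out)
			return
		}
		pods := strings.Fields(string(out))
		fmt.Println("non-running pods:", pods)

		// Step 2: describe them all in one call (helpers_test.go:277 above). Pods that
		// no longer exist, or that live in a different namespace, are reported as
		// NotFound, hence the non-zero exit in the report.
		args := append([]string{"--context", ctx, "describe", "pod"}, pods...)
		desc, _ := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Println(string(desc))
	}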

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls --format json --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 image ls --format json --alsologtostderr: (2.328125415s)
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366801 image ls --format json --alsologtostderr:
[]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366801 image ls --format json --alsologtostderr:
I0401 19:57:52.343152   25533 out.go:345] Setting OutFile to fd 1 ...
I0401 19:57:52.343523   25533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:52.343541   25533 out.go:358] Setting ErrFile to fd 2...
I0401 19:57:52.343547   25533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:52.343733   25533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
I0401 19:57:52.344357   25533 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:52.344447   25533 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:52.344875   25533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:52.344945   25533 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:52.360800   25533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
I0401 19:57:52.361321   25533 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:52.361991   25533 main.go:141] libmachine: Using API Version  1
I0401 19:57:52.362021   25533 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:52.362410   25533 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:52.362631   25533 main.go:141] libmachine: (functional-366801) Calling .GetState
I0401 19:57:52.364389   25533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:52.364432   25533 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:52.380214   25533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
I0401 19:57:52.380857   25533 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:52.381344   25533 main.go:141] libmachine: Using API Version  1
I0401 19:57:52.381364   25533 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:52.381738   25533 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:52.381981   25533 main.go:141] libmachine: (functional-366801) Calling .DriverName
I0401 19:57:52.382184   25533 ssh_runner.go:195] Run: systemctl --version
I0401 19:57:52.382227   25533 main.go:141] libmachine: (functional-366801) Calling .GetSSHHostname
I0401 19:57:52.385301   25533 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:52.385841   25533 main.go:141] libmachine: (functional-366801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4a:0f", ip: ""} in network mk-functional-366801: {Iface:virbr1 ExpiryTime:2025-04-01 20:55:15 +0000 UTC Type:0 Mac:52:54:00:9d:4a:0f Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:functional-366801 Clientid:01:52:54:00:9d:4a:0f}
I0401 19:57:52.385880   25533 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined IP address 192.168.39.138 and MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:52.386008   25533 main.go:141] libmachine: (functional-366801) Calling .GetSSHPort
I0401 19:57:52.386182   25533 main.go:141] libmachine: (functional-366801) Calling .GetSSHKeyPath
I0401 19:57:52.386343   25533 main.go:141] libmachine: (functional-366801) Calling .GetSSHUsername
I0401 19:57:52.386459   25533 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/functional-366801/id_rsa Username:docker}
I0401 19:57:52.527171   25533 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 19:57:54.617004   25533 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.089803987s)
W0401 19:57:54.617077   25533 cache_images.go:734] Failed to list images for profile functional-366801 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0401 19:57:54.604469    8315 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-04-01T19:57:54Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0401 19:57:54.617150   25533 main.go:141] libmachine: Making call to close driver server
I0401 19:57:54.617162   25533 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:54.617472   25533 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:54.617489   25533 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:54.617498   25533 main.go:141] libmachine: Making call to close driver server
I0401 19:57:54.617505   25533 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:54.617514   25533 main.go:141] libmachine: (functional-366801) DBG | Closing plugin on server side
I0401 19:57:54.617751   25533 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:54.617769   25533 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:54.617793   25533 main.go:141] libmachine: (functional-366801) DBG | Closing plugin on server side
functional_test.go:292: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (2.33s)
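The assertion at functional_test.go:292 fails because the JSON image list came back empty ([]): inside the guest, sudo crictl images --output json hit a DeadlineExceeded, so out/minikube-linux-amd64 image ls --format json had nothing to report and registry.k8s.io/pause was never seen. A minimal sketch of the shape of that check, assuming only that the expected image name should appear somewhere in the JSON output (the real test decodes the list rather than substring-matching); the binary path and profile name are copied from the Run line above.

	// imagelist_check_sketch.go - illustrative only, not minikube's functional_test.go.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation that the test drives (see the Run line above).
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-366801",
			"image", "ls", "--format", "json", "--alsologtostderr")
		out, err := cmd.Output() // JSON goes to stdout; --alsologtostderr logs go to stderr
		if err != nil {
			fmt.Println("image ls failed:", err)
			return
		}

		// Crude presence check: an empty "[]" (as in the failure above) fails either way.
		if !strings.Contains(string(out), "registry.k8s.io/pause") {
			fmt.Printf("expected registry.k8s.io/pause in image list, got: %s\n", out)
			return
		}
		fmt.Println("registry.k8s.io/pause is present")
	}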

                                                
                                    
TestPreload (208.84s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-409829 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0401 20:42:27.798939   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:43:49.728830   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-409829 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m42.94283513s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409829 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-409829 image pull gcr.io/k8s-minikube/busybox: (3.834719241s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-409829
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-409829: (7.284906371s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-409829 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0401 20:44:06.659901   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-409829 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m31.776309712s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409829 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-01 20:45:34.6949566 +0000 UTC m=+3634.994705268
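preload_test.go:76 fails because gcr.io/k8s-minikube/busybox, pulled into the crio image store before the stop/restart, is absent from the image list output above; only the images the cluster pulled for itself (the v1.24.4 control plane, coredns, etcd, kindnetd, storage-provisioner) remain. The same sequence can be replayed by hand with the commands already shown in this section; a minimal Go sketch that shells out to them, with the binary path and profile name copied from the report (illustrative only, not the test's implementation):

	// preload_repro_sketch.go - replays the TestPreload flow by hand; illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run invokes the same minikube binary used in the report with the given arguments.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		profile := "test-preload-409829"

		steps := [][]string{
			// Start without a preload tarball on the older Kubernetes version (preload_test.go:44).
			{"start", "-p", profile, "--memory=2200", "--alsologtostderr", "--wait=true",
				"--preload=false", "--driver=kvm2", "--container-runtime=crio",
				"--kubernetes-version=v1.24.4"},
			// Pull an extra image into the runtime's store (preload_test.go:52).
			{"-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"},
			// Stop, then restart on the default Kubernetes version (preload_test.go:58 and :66).
			{"stop", "-p", profile},
			{"start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
				"--wait=true", "--driver=kvm2", "--container-runtime=crio"},
		}
		for _, s := range steps {
			if out, err := run(s...); err != nil {
				fmt.Printf("step %v failed: %v\n%s\n", s, err, out)
				return
			}
		}

		// The assertion that fails above: the manually pulled image should still be listed.
		out, err := run("-p", profile, "image", "list")
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if !strings.Contains(out, "gcr.io/k8s-minikube/busybox") {
			fmt.Printf("gcr.io/k8s-minikube/busybox missing from image list:\n%s\n", out)
			return
		}
		fmt.Println("busybox survived the stop/restart")
	}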
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-409829 -n test-preload-409829
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409829 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-409829 logs -n 25: (1.185752747s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-546775 ssh -n                                                                 | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
	|         | multinode-546775-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-546775 ssh -n multinode-546775 sudo cat                                       | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
	|         | /home/docker/cp-test_multinode-546775-m03_multinode-546775.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-546775 cp multinode-546775-m03:/home/docker/cp-test.txt                       | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
	|         | multinode-546775-m02:/home/docker/cp-test_multinode-546775-m03_multinode-546775-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-546775 ssh -n                                                                 | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
	|         | multinode-546775-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-546775 ssh -n multinode-546775-m02 sudo cat                                   | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
	|         | /home/docker/cp-test_multinode-546775-m03_multinode-546775-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-546775 node stop m03                                                          | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:28 UTC |
	| node    | multinode-546775 node start                                                             | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:28 UTC | 01 Apr 25 20:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-546775                                                                | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:29 UTC |                     |
	| stop    | -p multinode-546775                                                                     | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:29 UTC | 01 Apr 25 20:32 UTC |
	| start   | -p multinode-546775                                                                     | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:32 UTC | 01 Apr 25 20:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-546775                                                                | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:35 UTC |                     |
	| node    | multinode-546775 node delete                                                            | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:35 UTC | 01 Apr 25 20:35 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-546775 stop                                                                   | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:35 UTC | 01 Apr 25 20:38 UTC |
	| start   | -p multinode-546775                                                                     | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-546775                                                                | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:41 UTC |                     |
	| start   | -p multinode-546775-m02                                                                 | multinode-546775-m02 | jenkins | v1.35.0 | 01 Apr 25 20:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-546775-m03                                                                 | multinode-546775-m03 | jenkins | v1.35.0 | 01 Apr 25 20:41 UTC | 01 Apr 25 20:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-546775                                                                 | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:42 UTC |                     |
	| delete  | -p multinode-546775-m03                                                                 | multinode-546775-m03 | jenkins | v1.35.0 | 01 Apr 25 20:42 UTC | 01 Apr 25 20:42 UTC |
	| delete  | -p multinode-546775                                                                     | multinode-546775     | jenkins | v1.35.0 | 01 Apr 25 20:42 UTC | 01 Apr 25 20:42 UTC |
	| start   | -p test-preload-409829                                                                  | test-preload-409829  | jenkins | v1.35.0 | 01 Apr 25 20:42 UTC | 01 Apr 25 20:43 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-409829 image pull                                                          | test-preload-409829  | jenkins | v1.35.0 | 01 Apr 25 20:43 UTC | 01 Apr 25 20:43 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-409829                                                                  | test-preload-409829  | jenkins | v1.35.0 | 01 Apr 25 20:43 UTC | 01 Apr 25 20:44 UTC |
	| start   | -p test-preload-409829                                                                  | test-preload-409829  | jenkins | v1.35.0 | 01 Apr 25 20:44 UTC | 01 Apr 25 20:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-409829 image list                                                          | test-preload-409829  | jenkins | v1.35.0 | 01 Apr 25 20:45 UTC | 01 Apr 25 20:45 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:44:02
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:44:02.738030   48323 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:44:02.738136   48323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:44:02.738146   48323 out.go:358] Setting ErrFile to fd 2...
	I0401 20:44:02.738153   48323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:44:02.738397   48323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:44:02.738941   48323 out.go:352] Setting JSON to false
	I0401 20:44:02.739795   48323 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5187,"bootTime":1743535056,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:44:02.739886   48323 start.go:139] virtualization: kvm guest
	I0401 20:44:02.741848   48323 out.go:177] * [test-preload-409829] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:44:02.743205   48323 notify.go:220] Checking for updates...
	I0401 20:44:02.743219   48323 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:44:02.744318   48323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:44:02.745682   48323 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:44:02.746955   48323 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:44:02.748259   48323 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:44:02.749419   48323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:44:02.751152   48323 config.go:182] Loaded profile config "test-preload-409829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0401 20:44:02.751565   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:44:02.751635   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:44:02.766196   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0401 20:44:02.766687   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:44:02.767218   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:44:02.767249   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:44:02.767588   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:44:02.767771   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:02.769618   48323 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0401 20:44:02.770805   48323 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:44:02.771084   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:44:02.771116   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:44:02.786223   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I0401 20:44:02.786608   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:44:02.786995   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:44:02.787012   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:44:02.787320   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:44:02.787486   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:02.823853   48323 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 20:44:02.825226   48323 start.go:297] selected driver: kvm2
	I0401 20:44:02.825239   48323 start.go:901] validating driver "kvm2" against &{Name:test-preload-409829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:44:02.825321   48323 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:44:02.826037   48323 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:44:02.826101   48323 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:44:02.841264   48323 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:44:02.841598   48323 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:44:02.841627   48323 cni.go:84] Creating CNI manager for ""
	I0401 20:44:02.841665   48323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:44:02.841711   48323 start.go:340] cluster config:
	{Name:test-preload-409829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:44:02.841795   48323 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:44:02.843720   48323 out.go:177] * Starting "test-preload-409829" primary control-plane node in "test-preload-409829" cluster
	I0401 20:44:02.844892   48323 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0401 20:44:03.404159   48323 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0401 20:44:03.404191   48323 cache.go:56] Caching tarball of preloaded images
	I0401 20:44:03.404369   48323 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0401 20:44:03.406133   48323 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0401 20:44:03.407492   48323 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0401 20:44:03.523430   48323 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0401 20:44:16.003208   48323 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0401 20:44:16.003301   48323 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0401 20:44:16.845386   48323 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0401 20:44:16.845508   48323 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/config.json ...
	I0401 20:44:16.845766   48323 start.go:360] acquireMachinesLock for test-preload-409829: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 20:44:16.845836   48323 start.go:364] duration metric: took 47.042µs to acquireMachinesLock for "test-preload-409829"
	I0401 20:44:16.845849   48323 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:44:16.845855   48323 fix.go:54] fixHost starting: 
	I0401 20:44:16.846123   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:44:16.846158   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:44:16.860540   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0401 20:44:16.860978   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:44:16.861389   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:44:16.861415   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:44:16.861751   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:44:16.861931   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:16.862091   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetState
	I0401 20:44:16.863713   48323 fix.go:112] recreateIfNeeded on test-preload-409829: state=Stopped err=<nil>
	I0401 20:44:16.863735   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	W0401 20:44:16.863860   48323 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:44:16.866046   48323 out.go:177] * Restarting existing kvm2 VM for "test-preload-409829" ...
	I0401 20:44:16.867516   48323 main.go:141] libmachine: (test-preload-409829) Calling .Start
	I0401 20:44:16.867693   48323 main.go:141] libmachine: (test-preload-409829) starting domain...
	I0401 20:44:16.867714   48323 main.go:141] libmachine: (test-preload-409829) ensuring networks are active...
	I0401 20:44:16.868503   48323 main.go:141] libmachine: (test-preload-409829) Ensuring network default is active
	I0401 20:44:16.868860   48323 main.go:141] libmachine: (test-preload-409829) Ensuring network mk-test-preload-409829 is active
	I0401 20:44:16.869280   48323 main.go:141] libmachine: (test-preload-409829) getting domain XML...
	I0401 20:44:16.869952   48323 main.go:141] libmachine: (test-preload-409829) creating domain...
	I0401 20:44:18.079789   48323 main.go:141] libmachine: (test-preload-409829) waiting for IP...
	I0401 20:44:18.080659   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:18.080994   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:18.081087   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:18.080995   48408 retry.go:31] will retry after 231.646978ms: waiting for domain to come up
	I0401 20:44:18.314426   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:18.314904   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:18.314943   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:18.314886   48408 retry.go:31] will retry after 379.862939ms: waiting for domain to come up
	I0401 20:44:18.696596   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:18.696998   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:18.697022   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:18.696978   48408 retry.go:31] will retry after 464.790656ms: waiting for domain to come up
	I0401 20:44:19.163551   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:19.164022   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:19.164051   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:19.163975   48408 retry.go:31] will retry after 474.854801ms: waiting for domain to come up
	I0401 20:44:19.640227   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:19.640613   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:19.640637   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:19.640576   48408 retry.go:31] will retry after 555.450978ms: waiting for domain to come up
	I0401 20:44:20.197283   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:20.197627   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:20.197681   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:20.197638   48408 retry.go:31] will retry after 632.183316ms: waiting for domain to come up
	I0401 20:44:20.831649   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:20.832041   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:20.832095   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:20.832021   48408 retry.go:31] will retry after 793.858494ms: waiting for domain to come up
	I0401 20:44:21.627091   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:21.627574   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:21.627607   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:21.627549   48408 retry.go:31] will retry after 1.404855883s: waiting for domain to come up
	I0401 20:44:23.034604   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:23.035064   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:23.035084   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:23.035032   48408 retry.go:31] will retry after 1.303018794s: waiting for domain to come up
	I0401 20:44:24.339327   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:24.339693   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:24.339722   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:24.339660   48408 retry.go:31] will retry after 1.569181571s: waiting for domain to come up
	I0401 20:44:25.911416   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:25.911875   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:25.911899   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:25.911834   48408 retry.go:31] will retry after 1.776275512s: waiting for domain to come up
	I0401 20:44:27.690257   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:27.690720   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:27.690758   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:27.690705   48408 retry.go:31] will retry after 2.735777739s: waiting for domain to come up
	I0401 20:44:30.429579   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:30.429996   48323 main.go:141] libmachine: (test-preload-409829) DBG | unable to find current IP address of domain test-preload-409829 in network mk-test-preload-409829
	I0401 20:44:30.430015   48323 main.go:141] libmachine: (test-preload-409829) DBG | I0401 20:44:30.429956   48408 retry.go:31] will retry after 4.00165163s: waiting for domain to come up
	I0401 20:44:34.434923   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.435322   48323 main.go:141] libmachine: (test-preload-409829) found domain IP: 192.168.39.63
	I0401 20:44:34.435342   48323 main.go:141] libmachine: (test-preload-409829) reserving static IP address...
	I0401 20:44:34.435355   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has current primary IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.436028   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "test-preload-409829", mac: "52:54:00:0c:72:f5", ip: "192.168.39.63"} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.436057   48323 main.go:141] libmachine: (test-preload-409829) DBG | skip adding static IP to network mk-test-preload-409829 - found existing host DHCP lease matching {name: "test-preload-409829", mac: "52:54:00:0c:72:f5", ip: "192.168.39.63"}
	I0401 20:44:34.436070   48323 main.go:141] libmachine: (test-preload-409829) reserved static IP address 192.168.39.63 for domain test-preload-409829
	I0401 20:44:34.436081   48323 main.go:141] libmachine: (test-preload-409829) waiting for SSH...
	I0401 20:44:34.436150   48323 main.go:141] libmachine: (test-preload-409829) DBG | Getting to WaitForSSH function...
	I0401 20:44:34.438313   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.438762   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.438786   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.438912   48323 main.go:141] libmachine: (test-preload-409829) DBG | Using SSH client type: external
	I0401 20:44:34.438952   48323 main.go:141] libmachine: (test-preload-409829) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa (-rw-------)
	I0401 20:44:34.438982   48323 main.go:141] libmachine: (test-preload-409829) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 20:44:34.438999   48323 main.go:141] libmachine: (test-preload-409829) DBG | About to run SSH command:
	I0401 20:44:34.439013   48323 main.go:141] libmachine: (test-preload-409829) DBG | exit 0
	I0401 20:44:34.566376   48323 main.go:141] libmachine: (test-preload-409829) DBG | SSH cmd err, output: <nil>: 
	I0401 20:44:34.566812   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetConfigRaw
	I0401 20:44:34.567520   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetIP
	I0401 20:44:34.570114   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.570478   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.570514   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.570805   48323 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/config.json ...
	I0401 20:44:34.571008   48323 machine.go:93] provisionDockerMachine start ...
	I0401 20:44:34.571025   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:34.571237   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:34.573367   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.573686   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.573716   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.573840   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:34.574002   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:34.574148   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:34.574255   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:34.574429   48323 main.go:141] libmachine: Using SSH client type: native
	I0401 20:44:34.574750   48323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0401 20:44:34.574763   48323 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:44:34.687110   48323 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 20:44:34.687144   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetMachineName
	I0401 20:44:34.687410   48323 buildroot.go:166] provisioning hostname "test-preload-409829"
	I0401 20:44:34.687447   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetMachineName
	I0401 20:44:34.687646   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:34.690599   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.690875   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.690912   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.691089   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:34.691307   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:34.691498   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:34.691664   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:34.691812   48323 main.go:141] libmachine: Using SSH client type: native
	I0401 20:44:34.692057   48323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0401 20:44:34.692077   48323 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-409829 && echo "test-preload-409829" | sudo tee /etc/hostname
	I0401 20:44:34.817357   48323 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-409829
	
	I0401 20:44:34.817386   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:34.820142   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.820532   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.820567   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.820695   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:34.820869   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:34.821025   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:34.821127   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:34.821261   48323 main.go:141] libmachine: Using SSH client type: native
	I0401 20:44:34.821453   48323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0401 20:44:34.821468   48323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-409829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-409829/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-409829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:44:34.948110   48323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:44:34.948160   48323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 20:44:34.948183   48323 buildroot.go:174] setting up certificates
	I0401 20:44:34.948195   48323 provision.go:84] configureAuth start
	I0401 20:44:34.948207   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetMachineName
	I0401 20:44:34.948507   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetIP
	I0401 20:44:34.951417   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.951705   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.951734   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.951886   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:34.954141   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.954429   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:34.954457   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:34.954652   48323 provision.go:143] copyHostCerts
	I0401 20:44:34.954721   48323 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 20:44:34.954776   48323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 20:44:34.954889   48323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 20:44:34.955090   48323 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 20:44:34.955105   48323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 20:44:34.955144   48323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 20:44:34.955220   48323 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 20:44:34.955231   48323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 20:44:34.955266   48323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 20:44:34.955344   48323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.test-preload-409829 san=[127.0.0.1 192.168.39.63 localhost minikube test-preload-409829]
	I0401 20:44:35.153260   48323 provision.go:177] copyRemoteCerts
	I0401 20:44:35.153332   48323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:44:35.153362   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:35.156175   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.156477   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.156500   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.156744   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:35.156892   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.157002   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:35.157154   48323 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa Username:docker}
	I0401 20:44:35.244809   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:44:35.270197   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:44:35.298693   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:44:35.325821   48323 provision.go:87] duration metric: took 377.612708ms to configureAuth
	I0401 20:44:35.325866   48323 buildroot.go:189] setting minikube options for container-runtime
	I0401 20:44:35.326079   48323 config.go:182] Loaded profile config "test-preload-409829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0401 20:44:35.326173   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:35.328775   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.329130   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.329159   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.329323   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:35.329518   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.329656   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.329768   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:35.329911   48323 main.go:141] libmachine: Using SSH client type: native
	I0401 20:44:35.330096   48323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0401 20:44:35.330110   48323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:44:35.571645   48323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:44:35.571677   48323 machine.go:96] duration metric: took 1.00065682s to provisionDockerMachine
	I0401 20:44:35.571696   48323 start.go:293] postStartSetup for "test-preload-409829" (driver="kvm2")
	I0401 20:44:35.571710   48323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:44:35.571728   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:35.572072   48323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:44:35.572103   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:35.574624   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.574928   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.574957   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.575094   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:35.575267   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.575393   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:35.575493   48323 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa Username:docker}
	I0401 20:44:35.661828   48323 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:44:35.666631   48323 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 20:44:35.666660   48323 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 20:44:35.666754   48323 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 20:44:35.666863   48323 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 20:44:35.666985   48323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:44:35.677531   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:44:35.702413   48323 start.go:296] duration metric: took 130.704567ms for postStartSetup
	I0401 20:44:35.702456   48323 fix.go:56] duration metric: took 18.856601738s for fixHost
	I0401 20:44:35.702481   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:35.705447   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.705808   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.705847   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.705961   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:35.706123   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.706316   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.706464   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:35.706616   48323 main.go:141] libmachine: Using SSH client type: native
	I0401 20:44:35.706844   48323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0401 20:44:35.706857   48323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 20:44:35.815461   48323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743540275.772587526
	
	I0401 20:44:35.815507   48323 fix.go:216] guest clock: 1743540275.772587526
	I0401 20:44:35.815518   48323 fix.go:229] Guest: 2025-04-01 20:44:35.772587526 +0000 UTC Remote: 2025-04-01 20:44:35.70246388 +0000 UTC m=+33.001268103 (delta=70.123646ms)
	I0401 20:44:35.815560   48323 fix.go:200] guest clock delta is within tolerance: 70.123646ms
	I0401 20:44:35.815571   48323 start.go:83] releasing machines lock for "test-preload-409829", held for 18.969726506s
	I0401 20:44:35.815607   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:35.815836   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetIP
	I0401 20:44:35.818684   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.819046   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.819078   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.819222   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:35.819751   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:35.819945   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:44:35.820066   48323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:44:35.820122   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:35.820164   48323 ssh_runner.go:195] Run: cat /version.json
	I0401 20:44:35.820189   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:44:35.822963   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.823133   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.823338   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.823366   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.823438   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:35.823474   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:35.823477   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:35.823664   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:44:35.823676   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.823853   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:35.823857   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:44:35.824013   48323 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa Username:docker}
	I0401 20:44:35.824026   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:44:35.824164   48323 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa Username:docker}
	I0401 20:44:35.903648   48323 ssh_runner.go:195] Run: systemctl --version
	I0401 20:44:35.930225   48323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:44:36.082121   48323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 20:44:36.089039   48323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 20:44:36.089130   48323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:44:36.107772   48323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 20:44:36.107798   48323 start.go:495] detecting cgroup driver to use...
	I0401 20:44:36.107865   48323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:44:36.126415   48323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:44:36.141427   48323 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:44:36.141484   48323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:44:36.156195   48323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:44:36.171148   48323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:44:36.290480   48323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:44:36.436184   48323 docker.go:233] disabling docker service ...
	I0401 20:44:36.436241   48323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:44:36.450692   48323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:44:36.464261   48323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:44:36.599154   48323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:44:36.710724   48323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:44:36.724735   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:44:36.743840   48323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0401 20:44:36.743905   48323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.755138   48323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:44:36.755204   48323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.766264   48323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.776997   48323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.787698   48323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:44:36.798843   48323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.809467   48323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.827398   48323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:44:36.837920   48323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:44:36.847646   48323 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 20:44:36.847718   48323 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 20:44:36.862117   48323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:44:36.872597   48323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:44:36.999337   48323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:44:37.098933   48323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:44:37.099005   48323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
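
The 60-second wait for /var/run/crio/crio.sock above is essentially a stat-based poll. A minimal sketch of that loop (not from the log; the 500ms retry interval is an assumption, since the actual interval is not shown):

	package main
	
	import (
		"log"
		"os"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for {
			// Succeeds once CRI-O has created its socket after the restart above.
			if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
				log.Println("crio socket is up")
				return
			}
			if time.Now().After(deadline) {
				log.Fatal("timed out waiting for /var/run/crio/crio.sock")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
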
	I0401 20:44:37.104109   48323 start.go:563] Will wait 60s for crictl version
	I0401 20:44:37.104174   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:37.108254   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:44:37.148899   48323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 20:44:37.148980   48323 ssh_runner.go:195] Run: crio --version
	I0401 20:44:37.178303   48323 ssh_runner.go:195] Run: crio --version
	I0401 20:44:37.211619   48323 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0401 20:44:37.213100   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetIP
	I0401 20:44:37.215907   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:37.216219   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:44:37.216246   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:44:37.216437   48323 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 20:44:37.220845   48323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
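
The /etc/hosts update above drops any stale host.minikube.internal line and appends the current mapping. A small, illustrative Go equivalent (not part of the log; it writes the file directly instead of going through a temp file and "sudo cp", which is a simplification):

	package main
	
	import (
		"log"
		"os"
		"strings"
	)
	
	func main() {
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale host.minikube.internal mapping, mirroring the grep -v above.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		// The real flow writes a temp file and copies it into place with sudo.
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}
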
	I0401 20:44:37.234090   48323 kubeadm.go:883] updating cluster {Name:test-preload-409829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:44:37.234207   48323 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0401 20:44:37.234288   48323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:44:37.270587   48323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0401 20:44:37.270669   48323 ssh_runner.go:195] Run: which lz4
	I0401 20:44:37.274771   48323 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:44:37.279154   48323 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:44:37.279188   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0401 20:44:38.901506   48323 crio.go:462] duration metric: took 1.626783353s to copy over tarball
	I0401 20:44:38.901592   48323 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:44:41.397413   48323 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.495793409s)
	I0401 20:44:41.397448   48323 crio.go:469] duration metric: took 2.495892396s to extract the tarball
	I0401 20:44:41.397458   48323 ssh_runner.go:146] rm: /preloaded.tar.lz4
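
The preload path above copies the lz4 tarball to the guest, unpacks it under /var, and then deletes it. A simplified local sketch of the extract-and-clean-up step (not from the log; the SSH transport is omitted):

	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		steps := [][]string{
			// Unpack the preloaded images into /var, preserving extended attributes.
			{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
			// Remove the tarball once it has been extracted.
			{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", s, err, out)
			}
		}
	}
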
	I0401 20:44:41.439356   48323 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:44:41.481325   48323 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0401 20:44:41.481346   48323 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:44:41.481408   48323 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:44:41.481418   48323 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:41.481435   48323 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:41.481471   48323 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:41.481478   48323 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:41.481472   48323 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:41.481502   48323 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:41.481552   48323 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0401 20:44:41.483055   48323 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:44:41.483064   48323 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:41.483056   48323 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:41.483075   48323 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:41.483059   48323 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0401 20:44:41.483098   48323 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:41.483099   48323 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:41.483127   48323 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:41.618332   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:41.625982   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:41.626357   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0401 20:44:41.630245   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:41.648881   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:41.661145   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:41.686130   48323 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0401 20:44:41.686180   48323 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:41.686239   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.687148   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:41.791948   48323 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0401 20:44:41.791999   48323 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0401 20:44:41.792005   48323 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:41.792023   48323 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0401 20:44:41.792062   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.792068   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.801878   48323 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0401 20:44:41.801927   48323 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:41.801947   48323 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0401 20:44:41.801977   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.801980   48323 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:41.801985   48323 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0401 20:44:41.802003   48323 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:41.802022   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.802026   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:41.802031   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.825509   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:41.825577   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0401 20:44:41.825577   48323 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0401 20:44:41.825588   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:41.825617   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:41.825619   48323 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:41.825681   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:41.825693   48323 ssh_runner.go:195] Run: which crictl
	I0401 20:44:41.920952   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:41.926502   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:41.959973   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:41.960050   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:41.984004   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:41.984058   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:41.984148   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0401 20:44:42.072855   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0401 20:44:42.072902   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0401 20:44:42.115364   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0401 20:44:42.115384   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:42.152540   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0401 20:44:42.164174   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0401 20:44:42.168787   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0401 20:44:42.235876   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0401 20:44:42.235983   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0401 20:44:42.245790   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0401 20:44:42.245894   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0401 20:44:42.271325   48323 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0401 20:44:42.285162   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0401 20:44:42.285284   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0401 20:44:42.314731   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0401 20:44:42.314861   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0401 20:44:42.338590   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0401 20:44:42.338711   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0401 20:44:42.340996   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0401 20:44:42.341015   48323 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0401 20:44:42.341050   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0401 20:44:42.341055   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0401 20:44:42.341115   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0401 20:44:42.341130   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0401 20:44:42.341155   48323 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0401 20:44:42.341170   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0401 20:44:42.341186   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0401 20:44:42.341233   48323 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0401 20:44:42.343827   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0401 20:44:42.788141   48323 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:44:45.100463   48323 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.759364405s)
	I0401 20:44:45.100495   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0401 20:44:45.100519   48323 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.759310003s)
	I0401 20:44:45.100548   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0401 20:44:45.100527   48323 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0401 20:44:45.100556   48323 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.759307224s)
	I0401 20:44:45.100590   48323 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0401 20:44:45.100608   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0401 20:44:45.100623   48323 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.31245371s)
	I0401 20:44:45.852515   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0401 20:44:45.852577   48323 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0401 20:44:45.852628   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0401 20:44:46.298751   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0401 20:44:46.298802   48323 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0401 20:44:46.298852   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0401 20:44:47.049502   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0401 20:44:47.049552   48323 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0401 20:44:47.049640   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0401 20:44:49.200079   48323 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.150418057s)
	I0401 20:44:49.200108   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0401 20:44:49.200133   48323 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0401 20:44:49.200180   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0401 20:44:50.057164   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0401 20:44:50.057213   48323 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0401 20:44:50.057262   48323 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0401 20:44:50.203951   48323 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0401 20:44:50.203995   48323 cache_images.go:123] Successfully loaded all cached images
	I0401 20:44:50.204002   48323 cache_images.go:92] duration metric: took 8.722646364s to LoadCachedImages
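
The image-cache phase above ends with a "podman load" of each transferred tarball. An illustrative loop over the image files named in the log (not part of the log; error handling and the SSH runner are simplified to local exec):

	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		// Image tarballs transferred to the guest, as named in the log above.
		images := []string{
			"coredns_v1.8.6",
			"kube-apiserver_v1.24.4",
			"kube-scheduler_v1.24.4",
			"kube-controller-manager_v1.24.4",
			"etcd_3.5.3-0",
			"kube-proxy_v1.24.4",
			"pause_3.7",
		}
		for _, img := range images {
			out, err := exec.Command("sudo", "podman", "load", "-i", "/var/lib/minikube/images/"+img).CombinedOutput()
			if err != nil {
				log.Fatalf("loading %s: %v\n%s", img, err, out)
			}
		}
	}
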
	I0401 20:44:50.204017   48323 kubeadm.go:934] updating node { 192.168.39.63 8443 v1.24.4 crio true true} ...
	I0401 20:44:50.204156   48323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-409829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-409829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:44:50.204230   48323 ssh_runner.go:195] Run: crio config
	I0401 20:44:50.257865   48323 cni.go:84] Creating CNI manager for ""
	I0401 20:44:50.257886   48323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:44:50.257897   48323 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:44:50.257920   48323 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.63 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-409829 NodeName:test-preload-409829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:44:50.258035   48323 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-409829"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:44:50.258105   48323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0401 20:44:50.269034   48323 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:44:50.269094   48323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:44:50.279549   48323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0401 20:44:50.297087   48323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:44:50.314269   48323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
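
At this point the kubeadm config rendered earlier has been uploaded as /var/tmp/minikube/kubeadm.yaml.new. A quick, illustrative way to confirm it parses as the expected multi-document YAML; this check is not something the log itself performs, and it assumes the gopkg.in/yaml.v3 module is available:

	package main
	
	import (
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		// The file contains several YAML documents separated by "---".
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatalf("document %d: %v", i, err)
			}
			fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
		}
	}
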
	I0401 20:44:50.332319   48323 ssh_runner.go:195] Run: grep 192.168.39.63	control-plane.minikube.internal$ /etc/hosts
	I0401 20:44:50.336310   48323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:44:50.350743   48323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:44:50.496321   48323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:44:50.513951   48323 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829 for IP: 192.168.39.63
	I0401 20:44:50.513976   48323 certs.go:194] generating shared ca certs ...
	I0401 20:44:50.514000   48323 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:44:50.514179   48323 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 20:44:50.514246   48323 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 20:44:50.514261   48323 certs.go:256] generating profile certs ...
	I0401 20:44:50.514363   48323 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/client.key
	I0401 20:44:50.514436   48323 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/apiserver.key.0070b15f
	I0401 20:44:50.514489   48323 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/proxy-client.key
	I0401 20:44:50.514621   48323 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 20:44:50.514656   48323 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 20:44:50.514671   48323 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:44:50.514706   48323 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:44:50.514744   48323 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:44:50.514784   48323 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 20:44:50.514838   48323 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:44:50.515441   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:44:50.572633   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 20:44:50.601430   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:44:50.630686   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:44:50.659487   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:44:50.693931   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:44:50.739372   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:44:50.763242   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:44:50.789578   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:44:50.814792   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 20:44:50.839293   48323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 20:44:50.870620   48323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:44:50.888160   48323 ssh_runner.go:195] Run: openssl version
	I0401 20:44:50.894258   48323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:44:50.905579   48323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:44:50.910409   48323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:44:50.910473   48323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:44:50.916549   48323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:44:50.927679   48323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 20:44:50.938589   48323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 20:44:50.943147   48323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 20:44:50.943197   48323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 20:44:50.949465   48323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 20:44:50.960901   48323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 20:44:50.972799   48323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 20:44:50.978056   48323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 20:44:50.978136   48323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 20:44:50.984507   48323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:44:50.996119   48323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:44:51.000743   48323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:44:51.007056   48323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:44:51.013178   48323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:44:51.019508   48323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:44:51.025354   48323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:44:51.031138   48323 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
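
Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours. Roughly the same check in Go, shown as an illustrative sketch (not from the log) against one of the paths listed above:

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		// Path taken from the log; any of the checked certificates would do.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate is valid for at least 24h")
		}
	}
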
	I0401 20:44:51.037087   48323 kubeadm.go:392] StartCluster: {Name:test-preload-409829 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-409829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:44:51.037158   48323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:44:51.037194   48323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:44:51.078015   48323 cri.go:89] found id: ""
	I0401 20:44:51.078090   48323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:44:51.089157   48323 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:44:51.089175   48323 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:44:51.089214   48323 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:44:51.099230   48323 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:44:51.099713   48323 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-409829" does not appear in /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:44:51.099878   48323 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-9129/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-409829" cluster setting kubeconfig missing "test-preload-409829" context setting]
	I0401 20:44:51.100291   48323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:44:51.100966   48323 kapi.go:59] client config for test-preload-409829: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/client.crt", KeyFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/client.key", CAFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 20:44:51.101472   48323 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0401 20:44:51.101492   48323 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0401 20:44:51.101499   48323 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0401 20:44:51.101504   48323 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0401 20:44:51.101928   48323 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:44:51.111424   48323 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.63
	I0401 20:44:51.111450   48323 kubeadm.go:1160] stopping kube-system containers ...
	I0401 20:44:51.111475   48323 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 20:44:51.111517   48323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:44:51.150243   48323 cri.go:89] found id: ""
	I0401 20:44:51.150317   48323 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 20:44:51.166727   48323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:44:51.177026   48323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:44:51.177045   48323 kubeadm.go:157] found existing configuration files:
	
	I0401 20:44:51.177098   48323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:44:51.186577   48323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:44:51.186647   48323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:44:51.196288   48323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:44:51.205563   48323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:44:51.205620   48323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:44:51.215213   48323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:44:51.224080   48323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:44:51.224136   48323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:44:51.233880   48323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:44:51.243577   48323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:44:51.243637   48323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:44:51.253308   48323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:44:51.263091   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 20:44:51.373734   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 20:44:52.336136   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 20:44:52.594125   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 20:44:52.667301   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
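
The restart path above re-runs individual "kubeadm init" phases against the uploaded config rather than a full init. An illustrative runner for the same five phases (not from the log; local exec stands in for the SSH runner, and only the phases shown above are included):

	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm %v: %v", args, err)
			}
		}
	}
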
	I0401 20:44:52.751377   48323 api_server.go:52] waiting for apiserver process to appear ...
	I0401 20:44:52.751457   48323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:44:53.252092   48323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:44:53.751754   48323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:44:53.771707   48323 api_server.go:72] duration metric: took 1.020304093s to wait for apiserver process to appear ...
	I0401 20:44:53.771742   48323 api_server.go:88] waiting for apiserver healthz status ...
	I0401 20:44:53.771766   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:44:53.772306   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": dial tcp 192.168.39.63:8443: connect: connection refused
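
The repeated healthz checks above and below amount to polling https://192.168.39.63:8443/healthz until the apiserver answers "ok" or the wait times out. A minimal sketch (not from the log), assuming a 4-minute deadline and a fixed retry interval, and skipping TLS verification purely to keep it short; the real client uses the profile's client certificates shown earlier:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.63:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
				log.Printf("healthz returned %d: %s", resp.StatusCode, body)
			} else {
				log.Printf("healthz not reachable yet: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for apiserver healthz")
	}
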
	I0401 20:44:54.271998   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:44:59.273388   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 20:44:59.273431   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:04.274238   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 20:45:04.274299   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:09.275413   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 20:45:09.275478   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:14.276453   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 20:45:14.276560   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:14.564534   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": read tcp 192.168.39.1:41418->192.168.39.63:8443: read: connection reset by peer
	I0401 20:45:14.771914   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:14.772593   48323 api_server.go:269] stopped: https://192.168.39.63:8443/healthz: Get "https://192.168.39.63:8443/healthz": dial tcp 192.168.39.63:8443: connect: connection refused
	I0401 20:45:15.272257   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:18.149465   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 20:45:18.149494   48323 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 20:45:18.149509   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:18.256525   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:45:18.256554   48323 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:45:18.272824   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:18.296782   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:45:18.296827   48323 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:45:18.772344   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:18.785908   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:45:18.785942   48323 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:45:19.272335   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:19.284272   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:45:19.284301   48323 api_server.go:103] status: https://192.168.39.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:45:19.771832   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:19.777420   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 200:
	ok
	I0401 20:45:19.784961   48323 api_server.go:141] control plane version: v1.24.4
	I0401 20:45:19.784988   48323 api_server.go:131] duration metric: took 26.013237996s to wait for apiserver health ...
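	
	For reference, the polling pattern behind the healthz output above can be reproduced with a short standalone probe. This is only a sketch, not minikube's actual api_server.go: the address is taken from the log, while skipping TLS verification and the 500ms retry interval are assumptions made to keep the example self-contained. A 500 response whose body lists [-] post-start hooks simply means the apiserver is still finishing startup, so the probe keeps polling.
	
	```go
	// Minimal sketch of waiting for an apiserver /healthz endpoint to return 200.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Assumption: skip certificate verification for a local test cluster.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				// A 500 with per-hook [+]/[-] details (e.g. rbac/bootstrap-roles
				// failed) means startup is still in progress; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.63:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```
	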
	I0401 20:45:19.784999   48323 cni.go:84] Creating CNI manager for ""
	I0401 20:45:19.785007   48323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:45:19.786852   48323 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 20:45:19.787999   48323 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 20:45:19.808562   48323 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0401 20:45:19.852182   48323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:45:19.872734   48323 system_pods.go:59] 7 kube-system pods found
	I0401 20:45:19.872765   48323 system_pods.go:61] "coredns-6d4b75cb6d-zqlfl" [178fe8c3-3aa6-47a0-9933-422bcca6d264] Running
	I0401 20:45:19.872778   48323 system_pods.go:61] "etcd-test-preload-409829" [586e67b8-5fc1-43ae-ba40-e1a2b078cced] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 20:45:19.872784   48323 system_pods.go:61] "kube-apiserver-test-preload-409829" [4b75c9c2-d04b-4e7f-9df6-501f655e7a10] Running
	I0401 20:45:19.872793   48323 system_pods.go:61] "kube-controller-manager-test-preload-409829" [716aec19-5276-4d0b-9068-f2a7213acc53] Running
	I0401 20:45:19.872797   48323 system_pods.go:61] "kube-proxy-fzwb5" [b507fb8e-3384-4276-8d7e-fb33696b5c2f] Running
	I0401 20:45:19.872802   48323 system_pods.go:61] "kube-scheduler-test-preload-409829" [6ee50e1b-68b8-4791-bbdb-946daa5b6c38] Running
	I0401 20:45:19.872809   48323 system_pods.go:61] "storage-provisioner" [ebd38b3b-1253-41c4-ada7-a2db0d2dc032] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 20:45:19.872816   48323 system_pods.go:74] duration metric: took 20.610497ms to wait for pod list to return data ...
	I0401 20:45:19.872830   48323 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:45:19.883219   48323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 20:45:19.883247   48323 node_conditions.go:123] node cpu capacity is 2
	I0401 20:45:19.883269   48323 node_conditions.go:105] duration metric: took 10.434162ms to run NodePressure ...
	I0401 20:45:19.883284   48323 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 20:45:20.216346   48323 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0401 20:45:20.223385   48323 kubeadm.go:739] kubelet initialised
	I0401 20:45:20.223412   48323 kubeadm.go:740] duration metric: took 7.034867ms waiting for restarted kubelet to initialise ...
	I0401 20:45:20.223422   48323 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:45:20.227106   48323 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:20.232158   48323 pod_ready.go:98] node "test-preload-409829" hosting pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.232189   48323 pod_ready.go:82] duration metric: took 5.052494ms for pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace to be "Ready" ...
	E0401 20:45:20.232208   48323 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409829" hosting pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.232222   48323 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:20.238452   48323 pod_ready.go:98] node "test-preload-409829" hosting pod "etcd-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.238474   48323 pod_ready.go:82] duration metric: took 6.2316ms for pod "etcd-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	E0401 20:45:20.238486   48323 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409829" hosting pod "etcd-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.238493   48323 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:20.244661   48323 pod_ready.go:98] node "test-preload-409829" hosting pod "kube-apiserver-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.244687   48323 pod_ready.go:82] duration metric: took 6.183297ms for pod "kube-apiserver-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	E0401 20:45:20.244698   48323 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409829" hosting pod "kube-apiserver-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.244705   48323 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:20.255837   48323 pod_ready.go:98] node "test-preload-409829" hosting pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.255873   48323 pod_ready.go:82] duration metric: took 11.134577ms for pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	E0401 20:45:20.255885   48323 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409829" hosting pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.255894   48323 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fzwb5" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:20.655346   48323 pod_ready.go:98] node "test-preload-409829" hosting pod "kube-proxy-fzwb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.655382   48323 pod_ready.go:82] duration metric: took 399.477011ms for pod "kube-proxy-fzwb5" in "kube-system" namespace to be "Ready" ...
	E0401 20:45:20.655394   48323 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409829" hosting pod "kube-proxy-fzwb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:20.655402   48323 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:21.056315   48323 pod_ready.go:98] node "test-preload-409829" hosting pod "kube-scheduler-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:21.056337   48323 pod_ready.go:82] duration metric: took 400.927682ms for pod "kube-scheduler-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	E0401 20:45:21.056346   48323 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-409829" hosting pod "kube-scheduler-test-preload-409829" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:21.056353   48323 pod_ready.go:39] duration metric: took 832.920603ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:45:21.056378   48323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:45:21.070732   48323 ops.go:34] apiserver oom_adj: -16
	I0401 20:45:21.070757   48323 kubeadm.go:597] duration metric: took 29.981574877s to restartPrimaryControlPlane
	I0401 20:45:21.070767   48323 kubeadm.go:394] duration metric: took 30.033685171s to StartCluster
	I0401 20:45:21.070787   48323 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:45:21.070867   48323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:45:21.071518   48323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:45:21.071760   48323 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.63 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:45:21.071820   48323 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:45:21.071930   48323 addons.go:69] Setting storage-provisioner=true in profile "test-preload-409829"
	I0401 20:45:21.071958   48323 addons.go:69] Setting default-storageclass=true in profile "test-preload-409829"
	I0401 20:45:21.071986   48323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-409829"
	I0401 20:45:21.072027   48323 config.go:182] Loaded profile config "test-preload-409829": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0401 20:45:21.071962   48323 addons.go:238] Setting addon storage-provisioner=true in "test-preload-409829"
	W0401 20:45:21.072057   48323 addons.go:247] addon storage-provisioner should already be in state true
	I0401 20:45:21.072095   48323 host.go:66] Checking if "test-preload-409829" exists ...
	I0401 20:45:21.072316   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:45:21.072360   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:45:21.072451   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:45:21.072505   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:45:21.073615   48323 out.go:177] * Verifying Kubernetes components...
	I0401 20:45:21.074958   48323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:45:21.088507   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0401 20:45:21.088509   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I0401 20:45:21.088992   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:45:21.089044   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:45:21.089452   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:45:21.089470   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:45:21.089596   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:45:21.089619   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:45:21.089884   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:45:21.089972   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:45:21.090063   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetState
	I0401 20:45:21.090545   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:45:21.090602   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:45:21.092288   48323 kapi.go:59] client config for test-preload-409829: &rest.Config{Host:"https://192.168.39.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/client.crt", KeyFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/profiles/test-preload-409829/client.key", CAFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 20:45:21.092591   48323 addons.go:238] Setting addon default-storageclass=true in "test-preload-409829"
	W0401 20:45:21.092606   48323 addons.go:247] addon default-storageclass should already be in state true
	I0401 20:45:21.092634   48323 host.go:66] Checking if "test-preload-409829" exists ...
	I0401 20:45:21.092925   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:45:21.092955   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:45:21.105559   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0401 20:45:21.105997   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:45:21.106457   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:45:21.106472   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:45:21.106803   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:45:21.106984   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetState
	I0401 20:45:21.108606   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:45:21.110718   48323 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:45:21.111811   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0401 20:45:21.112123   48323 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:45:21.112146   48323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:45:21.112166   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:45:21.112339   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:45:21.112865   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:45:21.112890   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:45:21.113281   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:45:21.113868   48323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:45:21.113912   48323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:45:21.115697   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:45:21.116226   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:45:21.116247   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:45:21.116447   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:45:21.116608   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:45:21.116731   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:45:21.116869   48323 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa Username:docker}
	I0401 20:45:21.129110   48323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I0401 20:45:21.129741   48323 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:45:21.130261   48323 main.go:141] libmachine: Using API Version  1
	I0401 20:45:21.130284   48323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:45:21.130617   48323 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:45:21.130812   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetState
	I0401 20:45:21.132478   48323 main.go:141] libmachine: (test-preload-409829) Calling .DriverName
	I0401 20:45:21.132720   48323 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:45:21.132742   48323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:45:21.132764   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHHostname
	I0401 20:45:21.135608   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:45:21.136071   48323 main.go:141] libmachine: (test-preload-409829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:72:f5", ip: ""} in network mk-test-preload-409829: {Iface:virbr1 ExpiryTime:2025-04-01 21:44:28 +0000 UTC Type:0 Mac:52:54:00:0c:72:f5 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:test-preload-409829 Clientid:01:52:54:00:0c:72:f5}
	I0401 20:45:21.136101   48323 main.go:141] libmachine: (test-preload-409829) DBG | domain test-preload-409829 has defined IP address 192.168.39.63 and MAC address 52:54:00:0c:72:f5 in network mk-test-preload-409829
	I0401 20:45:21.136273   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHPort
	I0401 20:45:21.136470   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHKeyPath
	I0401 20:45:21.136713   48323 main.go:141] libmachine: (test-preload-409829) Calling .GetSSHUsername
	I0401 20:45:21.136866   48323 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/test-preload-409829/id_rsa Username:docker}
	I0401 20:45:21.264213   48323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:45:21.289085   48323 node_ready.go:35] waiting up to 6m0s for node "test-preload-409829" to be "Ready" ...
	I0401 20:45:21.345217   48323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:45:21.402063   48323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:45:22.377370   48323 main.go:141] libmachine: Making call to close driver server
	I0401 20:45:22.377396   48323 main.go:141] libmachine: (test-preload-409829) Calling .Close
	I0401 20:45:22.377399   48323 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032144637s)
	I0401 20:45:22.377431   48323 main.go:141] libmachine: Making call to close driver server
	I0401 20:45:22.377441   48323 main.go:141] libmachine: (test-preload-409829) Calling .Close
	I0401 20:45:22.377692   48323 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:45:22.377709   48323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:45:22.377718   48323 main.go:141] libmachine: Making call to close driver server
	I0401 20:45:22.377724   48323 main.go:141] libmachine: (test-preload-409829) Calling .Close
	I0401 20:45:22.377774   48323 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:45:22.377787   48323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:45:22.377787   48323 main.go:141] libmachine: (test-preload-409829) DBG | Closing plugin on server side
	I0401 20:45:22.377796   48323 main.go:141] libmachine: Making call to close driver server
	I0401 20:45:22.377805   48323 main.go:141] libmachine: (test-preload-409829) Calling .Close
	I0401 20:45:22.377980   48323 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:45:22.378008   48323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:45:22.378041   48323 main.go:141] libmachine: (test-preload-409829) DBG | Closing plugin on server side
	I0401 20:45:22.378102   48323 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:45:22.378136   48323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:45:22.385406   48323 main.go:141] libmachine: Making call to close driver server
	I0401 20:45:22.385423   48323 main.go:141] libmachine: (test-preload-409829) Calling .Close
	I0401 20:45:22.385650   48323 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:45:22.385663   48323 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:45:22.387405   48323 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:45:22.388511   48323 addons.go:514] duration metric: took 1.316700375s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:45:23.293680   48323 node_ready.go:53] node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:25.792543   48323 node_ready.go:53] node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:27.792686   48323 node_ready.go:53] node "test-preload-409829" has status "Ready":"False"
	I0401 20:45:28.793846   48323 node_ready.go:49] node "test-preload-409829" has status "Ready":"True"
	I0401 20:45:28.793870   48323 node_ready.go:38] duration metric: took 7.504748685s for node "test-preload-409829" to be "Ready" ...
	I0401 20:45:28.793878   48323 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:45:28.797457   48323 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:28.801870   48323 pod_ready.go:93] pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace has status "Ready":"True"
	I0401 20:45:28.801893   48323 pod_ready.go:82] duration metric: took 4.408886ms for pod "coredns-6d4b75cb6d-zqlfl" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:28.801902   48323 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:28.806563   48323 pod_ready.go:93] pod "etcd-test-preload-409829" in "kube-system" namespace has status "Ready":"True"
	I0401 20:45:28.806591   48323 pod_ready.go:82] duration metric: took 4.681139ms for pod "etcd-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:28.806603   48323 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:28.810885   48323 pod_ready.go:93] pod "kube-apiserver-test-preload-409829" in "kube-system" namespace has status "Ready":"True"
	I0401 20:45:28.810904   48323 pod_ready.go:82] duration metric: took 4.294488ms for pod "kube-apiserver-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:28.810912   48323 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:30.816859   48323 pod_ready.go:103] pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace has status "Ready":"False"
	I0401 20:45:33.317196   48323 pod_ready.go:103] pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace has status "Ready":"False"
	I0401 20:45:33.820208   48323 pod_ready.go:93] pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace has status "Ready":"True"
	I0401 20:45:33.820236   48323 pod_ready.go:82] duration metric: took 5.009315486s for pod "kube-controller-manager-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:33.820248   48323 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fzwb5" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:33.825162   48323 pod_ready.go:93] pod "kube-proxy-fzwb5" in "kube-system" namespace has status "Ready":"True"
	I0401 20:45:33.825187   48323 pod_ready.go:82] duration metric: took 4.931876ms for pod "kube-proxy-fzwb5" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:33.825199   48323 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:33.829212   48323 pod_ready.go:93] pod "kube-scheduler-test-preload-409829" in "kube-system" namespace has status "Ready":"True"
	I0401 20:45:33.829228   48323 pod_ready.go:82] duration metric: took 4.022569ms for pod "kube-scheduler-test-preload-409829" in "kube-system" namespace to be "Ready" ...
	I0401 20:45:33.829237   48323 pod_ready.go:39] duration metric: took 5.035349749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
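	
	The pod-ready waits logged above come down to checking each pod's PodReady condition. A minimal client-go sketch, assuming the kubeconfig path and pod names shown in this log (this is not minikube's pod_ready.go):
	
	```go
	// Minimal sketch: report whether selected kube-system pods are Ready.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podIsReady returns true when the pod's PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Assumption: kubeconfig path taken from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20506-9129/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod names taken from the log above.
		for _, name := range []string{"coredns-6d4b75cb6d-zqlfl", "etcd-test-preload-409829", "kube-proxy-fzwb5"} {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("%s: %v\n", name, err)
				continue
			}
			fmt.Printf("%s ready=%v\n", name, podIsReady(pod))
		}
	}
	```
	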
	I0401 20:45:33.829251   48323 api_server.go:52] waiting for apiserver process to appear ...
	I0401 20:45:33.829307   48323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:45:33.844454   48323 api_server.go:72] duration metric: took 12.7726608s to wait for apiserver process to appear ...
	I0401 20:45:33.844479   48323 api_server.go:88] waiting for apiserver healthz status ...
	I0401 20:45:33.844497   48323 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8443/healthz ...
	I0401 20:45:33.850021   48323 api_server.go:279] https://192.168.39.63:8443/healthz returned 200:
	ok
	I0401 20:45:33.851520   48323 api_server.go:141] control plane version: v1.24.4
	I0401 20:45:33.851541   48323 api_server.go:131] duration metric: took 7.054226ms to wait for apiserver health ...
	I0401 20:45:33.851551   48323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:45:33.857644   48323 system_pods.go:59] 7 kube-system pods found
	I0401 20:45:33.857672   48323 system_pods.go:61] "coredns-6d4b75cb6d-zqlfl" [178fe8c3-3aa6-47a0-9933-422bcca6d264] Running
	I0401 20:45:33.857679   48323 system_pods.go:61] "etcd-test-preload-409829" [586e67b8-5fc1-43ae-ba40-e1a2b078cced] Running
	I0401 20:45:33.857684   48323 system_pods.go:61] "kube-apiserver-test-preload-409829" [4b75c9c2-d04b-4e7f-9df6-501f655e7a10] Running
	I0401 20:45:33.857688   48323 system_pods.go:61] "kube-controller-manager-test-preload-409829" [716aec19-5276-4d0b-9068-f2a7213acc53] Running
	I0401 20:45:33.857691   48323 system_pods.go:61] "kube-proxy-fzwb5" [b507fb8e-3384-4276-8d7e-fb33696b5c2f] Running
	I0401 20:45:33.857696   48323 system_pods.go:61] "kube-scheduler-test-preload-409829" [6ee50e1b-68b8-4791-bbdb-946daa5b6c38] Running
	I0401 20:45:33.857701   48323 system_pods.go:61] "storage-provisioner" [ebd38b3b-1253-41c4-ada7-a2db0d2dc032] Running
	I0401 20:45:33.857717   48323 system_pods.go:74] duration metric: took 6.150097ms to wait for pod list to return data ...
	I0401 20:45:33.857730   48323 default_sa.go:34] waiting for default service account to be created ...
	I0401 20:45:33.993551   48323 default_sa.go:45] found service account: "default"
	I0401 20:45:33.993651   48323 default_sa.go:55] duration metric: took 135.909363ms for default service account to be created ...
	I0401 20:45:33.993667   48323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 20:45:34.193935   48323 system_pods.go:86] 7 kube-system pods found
	I0401 20:45:34.193962   48323 system_pods.go:89] "coredns-6d4b75cb6d-zqlfl" [178fe8c3-3aa6-47a0-9933-422bcca6d264] Running
	I0401 20:45:34.193967   48323 system_pods.go:89] "etcd-test-preload-409829" [586e67b8-5fc1-43ae-ba40-e1a2b078cced] Running
	I0401 20:45:34.193971   48323 system_pods.go:89] "kube-apiserver-test-preload-409829" [4b75c9c2-d04b-4e7f-9df6-501f655e7a10] Running
	I0401 20:45:34.193975   48323 system_pods.go:89] "kube-controller-manager-test-preload-409829" [716aec19-5276-4d0b-9068-f2a7213acc53] Running
	I0401 20:45:34.193978   48323 system_pods.go:89] "kube-proxy-fzwb5" [b507fb8e-3384-4276-8d7e-fb33696b5c2f] Running
	I0401 20:45:34.193981   48323 system_pods.go:89] "kube-scheduler-test-preload-409829" [6ee50e1b-68b8-4791-bbdb-946daa5b6c38] Running
	I0401 20:45:34.193987   48323 system_pods.go:89] "storage-provisioner" [ebd38b3b-1253-41c4-ada7-a2db0d2dc032] Running
	I0401 20:45:34.193993   48323 system_pods.go:126] duration metric: took 200.319528ms to wait for k8s-apps to be running ...
	I0401 20:45:34.193999   48323 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 20:45:34.194042   48323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:45:34.209109   48323 system_svc.go:56] duration metric: took 15.102803ms WaitForService to wait for kubelet
	I0401 20:45:34.209135   48323 kubeadm.go:582] duration metric: took 13.137348585s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:45:34.209163   48323 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:45:34.393848   48323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 20:45:34.393876   48323 node_conditions.go:123] node cpu capacity is 2
	I0401 20:45:34.393887   48323 node_conditions.go:105] duration metric: took 184.719028ms to run NodePressure ...
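	
	The NodePressure step above reads the node's reported capacity (ephemeral storage and CPU). A minimal client-go sketch of fetching those values, assuming the same kubeconfig path and node name as in the log (this is not minikube's node_conditions.go):
	
	```go
	// Minimal sketch: print the node's CPU and ephemeral-storage capacity.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: kubeconfig path and node name taken from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20506-9129/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-409829", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Capacity is a ResourceList keyed by resource name.
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
	}
	```
	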
	I0401 20:45:34.393900   48323 start.go:241] waiting for startup goroutines ...
	I0401 20:45:34.393909   48323 start.go:246] waiting for cluster config update ...
	I0401 20:45:34.393930   48323 start.go:255] writing updated cluster config ...
	I0401 20:45:34.394263   48323 ssh_runner.go:195] Run: rm -f paused
	I0401 20:45:34.439965   48323 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0401 20:45:34.442045   48323 out.go:201] 
	W0401 20:45:34.443394   48323 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0401 20:45:34.444754   48323 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0401 20:45:34.446121   48323 out.go:177] * Done! kubectl is now configured to use "test-preload-409829" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.327019313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540335326994490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92602af2-1c6d-41e3-a527-9bf4148e6ce8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.327572290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32722a72-9be3-416b-ae73-32056c631efd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.327650136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32722a72-9be3-416b-ae73-32056c631efd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.327897991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37cfcb55fb77ad9b560bf0ac702d9d21da957fb80cd04b32c340bec1104046bf,PodSandboxId:715a8bd2c7796fabf4aa1aaad4289cb661a28f39f1e69787946a6c3855b985d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743540327088066395,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-zqlfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178fe8c3-3aa6-47a0-9933-422bcca6d264,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7c04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa635b7bf2a76b331a21f6deb8e8489428e5f0ba25105e8872331c744ed61601,PodSandboxId:24731f53d7f008bd92e1b87fd7dd0ead7f5152a94f6e5a7b60065c66e8b3dcd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743540320144915282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzwb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b507fb8e-3384-4276-8d7e-fb33696b5c2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3999178,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7271c8c6ff216ea58c6d71de6ddea878cf75fca132e2ec7b79317413a5d2035a,PodSandboxId:776f27518fd13221d7d9eee0f4562d43e73b7b3bf56b46fa3a89cbe522e7e1e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743540319747803039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd3
8b3b-1253-41c4-ada7-a2db0d2dc032,},Annotations:map[string]string{io.kubernetes.container.hash: 53182211,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cfc749622a09911b920213d02017f8e20f18959ac46c2e35fc8d31c650b418,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743540318921807007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: ec2a953144c33510d95b845efc98b633,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af836510be260283828f95df6f955451b633b0ca9a068f575d3404d7be1017b,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743540314907391174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 314f2cd790673dca5e364db0f70b05e3,},Annotations:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7410445dcd8732c3dfb8d62ec3b371f750152e441f24615eb10209a42df0887,PodSandboxId:00fe7b6eb9cd2f5e6a4c0f960f6dfcef2ab2dce79e58525c278f681139a07e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743540313097281304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c356d6ddeb2e001a363d9749921447,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33ee13a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a984abf465519c973aa92af8e53484a3ed5a921e0f1c3b1e237f19776dc0f534,PodSandboxId:fba45cd64a62d8db480ebeacd7b3d768c90ef318e86a6537c40f7ff7b94dcff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743540293388858983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7932b631a1c4212b58978cfe3cc6e85e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690df6b5d987837f3ff756148ce889c4888ee9a97944fcc03d8601f1bfe54c99,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1743540293415586960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2a953144c33510d95b845efc98b633
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290ee8fc137b535ed3b431b06c475d06f84fe0bbe3202ee4b0facba747f47062,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1743540293403628734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314f2cd790673dca5e364db0f70b05e3,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32722a72-9be3-416b-ae73-32056c631efd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.367128843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=093e669f-0552-4203-bd16-bb7b14cadef3 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.367203251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=093e669f-0552-4203-bd16-bb7b14cadef3 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.368223852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bfb38a9-2690-41d3-b9a2-c9792d6417c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.368649695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540335368628393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bfb38a9-2690-41d3-b9a2-c9792d6417c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.369099800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c3f2986-3f6a-4d04-98e1-5f99830a436d name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.369179241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c3f2986-3f6a-4d04-98e1-5f99830a436d name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.369382332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37cfcb55fb77ad9b560bf0ac702d9d21da957fb80cd04b32c340bec1104046bf,PodSandboxId:715a8bd2c7796fabf4aa1aaad4289cb661a28f39f1e69787946a6c3855b985d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743540327088066395,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-zqlfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178fe8c3-3aa6-47a0-9933-422bcca6d264,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7c04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa635b7bf2a76b331a21f6deb8e8489428e5f0ba25105e8872331c744ed61601,PodSandboxId:24731f53d7f008bd92e1b87fd7dd0ead7f5152a94f6e5a7b60065c66e8b3dcd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743540320144915282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzwb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b507fb8e-3384-4276-8d7e-fb33696b5c2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3999178,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7271c8c6ff216ea58c6d71de6ddea878cf75fca132e2ec7b79317413a5d2035a,PodSandboxId:776f27518fd13221d7d9eee0f4562d43e73b7b3bf56b46fa3a89cbe522e7e1e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743540319747803039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd3
8b3b-1253-41c4-ada7-a2db0d2dc032,},Annotations:map[string]string{io.kubernetes.container.hash: 53182211,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cfc749622a09911b920213d02017f8e20f18959ac46c2e35fc8d31c650b418,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743540318921807007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: ec2a953144c33510d95b845efc98b633,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af836510be260283828f95df6f955451b633b0ca9a068f575d3404d7be1017b,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743540314907391174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 314f2cd790673dca5e364db0f70b05e3,},Annotations:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7410445dcd8732c3dfb8d62ec3b371f750152e441f24615eb10209a42df0887,PodSandboxId:00fe7b6eb9cd2f5e6a4c0f960f6dfcef2ab2dce79e58525c278f681139a07e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743540313097281304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c356d6ddeb2e001a363d9749921447,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33ee13a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a984abf465519c973aa92af8e53484a3ed5a921e0f1c3b1e237f19776dc0f534,PodSandboxId:fba45cd64a62d8db480ebeacd7b3d768c90ef318e86a6537c40f7ff7b94dcff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743540293388858983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7932b631a1c4212b58978cfe3cc6e85e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690df6b5d987837f3ff756148ce889c4888ee9a97944fcc03d8601f1bfe54c99,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1743540293415586960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2a953144c33510d95b845efc98b633
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290ee8fc137b535ed3b431b06c475d06f84fe0bbe3202ee4b0facba747f47062,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1743540293403628734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314f2cd790673dca5e364db0f70b05e3,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c3f2986-3f6a-4d04-98e1-5f99830a436d name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.412777354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a5b416d-8101-4e74-aa2a-2460a227a78a name=/runtime.v1.RuntimeService/Version
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.412931136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a5b416d-8101-4e74-aa2a-2460a227a78a name=/runtime.v1.RuntimeService/Version
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.414049237Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edd21b2c-0a11-4bf2-bc23-bd92d2839336 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.414626033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540335414603381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edd21b2c-0a11-4bf2-bc23-bd92d2839336 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.415256241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f160e7af-3f21-43b1-90df-fda8ab158fe4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.415333346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f160e7af-3f21-43b1-90df-fda8ab158fe4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.415541953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37cfcb55fb77ad9b560bf0ac702d9d21da957fb80cd04b32c340bec1104046bf,PodSandboxId:715a8bd2c7796fabf4aa1aaad4289cb661a28f39f1e69787946a6c3855b985d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743540327088066395,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-zqlfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178fe8c3-3aa6-47a0-9933-422bcca6d264,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7c04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa635b7bf2a76b331a21f6deb8e8489428e5f0ba25105e8872331c744ed61601,PodSandboxId:24731f53d7f008bd92e1b87fd7dd0ead7f5152a94f6e5a7b60065c66e8b3dcd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743540320144915282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzwb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b507fb8e-3384-4276-8d7e-fb33696b5c2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3999178,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7271c8c6ff216ea58c6d71de6ddea878cf75fca132e2ec7b79317413a5d2035a,PodSandboxId:776f27518fd13221d7d9eee0f4562d43e73b7b3bf56b46fa3a89cbe522e7e1e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743540319747803039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd3
8b3b-1253-41c4-ada7-a2db0d2dc032,},Annotations:map[string]string{io.kubernetes.container.hash: 53182211,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cfc749622a09911b920213d02017f8e20f18959ac46c2e35fc8d31c650b418,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743540318921807007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: ec2a953144c33510d95b845efc98b633,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af836510be260283828f95df6f955451b633b0ca9a068f575d3404d7be1017b,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743540314907391174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 314f2cd790673dca5e364db0f70b05e3,},Annotations:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7410445dcd8732c3dfb8d62ec3b371f750152e441f24615eb10209a42df0887,PodSandboxId:00fe7b6eb9cd2f5e6a4c0f960f6dfcef2ab2dce79e58525c278f681139a07e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743540313097281304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c356d6ddeb2e001a363d9749921447,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33ee13a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a984abf465519c973aa92af8e53484a3ed5a921e0f1c3b1e237f19776dc0f534,PodSandboxId:fba45cd64a62d8db480ebeacd7b3d768c90ef318e86a6537c40f7ff7b94dcff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743540293388858983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7932b631a1c4212b58978cfe3cc6e85e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690df6b5d987837f3ff756148ce889c4888ee9a97944fcc03d8601f1bfe54c99,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1743540293415586960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2a953144c33510d95b845efc98b633
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290ee8fc137b535ed3b431b06c475d06f84fe0bbe3202ee4b0facba747f47062,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1743540293403628734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314f2cd790673dca5e364db0f70b05e3,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f160e7af-3f21-43b1-90df-fda8ab158fe4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.450107879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c8516ef-e07e-4263-9d2e-5eba4779fdff name=/runtime.v1.RuntimeService/Version
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.450221058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c8516ef-e07e-4263-9d2e-5eba4779fdff name=/runtime.v1.RuntimeService/Version
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.451405042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac2976f8-c261-4009-9021-c7d1afc093b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.452040773Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540335452016432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac2976f8-c261-4009-9021-c7d1afc093b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.452780483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e56b118e-88ff-4815-8f70-a5ea7b5c1567 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.452910521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e56b118e-88ff-4815-8f70-a5ea7b5c1567 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:45:35 test-preload-409829 crio[676]: time="2025-04-01 20:45:35.453113316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37cfcb55fb77ad9b560bf0ac702d9d21da957fb80cd04b32c340bec1104046bf,PodSandboxId:715a8bd2c7796fabf4aa1aaad4289cb661a28f39f1e69787946a6c3855b985d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1743540327088066395,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-zqlfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 178fe8c3-3aa6-47a0-9933-422bcca6d264,},Annotations:map[string]string{io.kubernetes.container.hash: 69a7c04,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa635b7bf2a76b331a21f6deb8e8489428e5f0ba25105e8872331c744ed61601,PodSandboxId:24731f53d7f008bd92e1b87fd7dd0ead7f5152a94f6e5a7b60065c66e8b3dcd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1743540320144915282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fzwb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b507fb8e-3384-4276-8d7e-fb33696b5c2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3999178,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7271c8c6ff216ea58c6d71de6ddea878cf75fca132e2ec7b79317413a5d2035a,PodSandboxId:776f27518fd13221d7d9eee0f4562d43e73b7b3bf56b46fa3a89cbe522e7e1e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1743540319747803039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd3
8b3b-1253-41c4-ada7-a2db0d2dc032,},Annotations:map[string]string{io.kubernetes.container.hash: 53182211,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cfc749622a09911b920213d02017f8e20f18959ac46c2e35fc8d31c650b418,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1743540318921807007,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: ec2a953144c33510d95b845efc98b633,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af836510be260283828f95df6f955451b633b0ca9a068f575d3404d7be1017b,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1743540314907391174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 314f2cd790673dca5e364db0f70b05e3,},Annotations:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7410445dcd8732c3dfb8d62ec3b371f750152e441f24615eb10209a42df0887,PodSandboxId:00fe7b6eb9cd2f5e6a4c0f960f6dfcef2ab2dce79e58525c278f681139a07e87,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1743540313097281304,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c356d6ddeb2e001a363d9749921447,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33ee13a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a984abf465519c973aa92af8e53484a3ed5a921e0f1c3b1e237f19776dc0f534,PodSandboxId:fba45cd64a62d8db480ebeacd7b3d768c90ef318e86a6537c40f7ff7b94dcff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1743540293388858983,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7932b631a1c4212b58978cfe3cc6e85e,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690df6b5d987837f3ff756148ce889c4888ee9a97944fcc03d8601f1bfe54c99,PodSandboxId:1579b6bac9a5f032b397212ce31fec09894379a374d81e83bcb97db08a0739ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1743540293415586960,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2a953144c33510d95b845efc98b633
,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290ee8fc137b535ed3b431b06c475d06f84fe0bbe3202ee4b0facba747f47062,PodSandboxId:1c5bcf163ffaba7aad34541db419612b0f044b85ca2fb56514f4df2c8ca9c713,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1743540293403628734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-409829,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314f2cd790673dca5e364db0f70b05e3,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 217cc04f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e56b118e-88ff-4815-8f70-a5ea7b5c1567 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37cfcb55fb77a       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   715a8bd2c7796       coredns-6d4b75cb6d-zqlfl
	fa635b7bf2a76       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   24731f53d7f00       kube-proxy-fzwb5
	7271c8c6ff216       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   776f27518fd13       storage-provisioner
	51cfc749622a0       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   16 seconds ago      Running             kube-controller-manager   2                   1579b6bac9a5f       kube-controller-manager-test-preload-409829
	8af836510be26       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            2                   1c5bcf163ffab       kube-apiserver-test-preload-409829
	a7410445dcd87       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   00fe7b6eb9cd2       etcd-test-preload-409829
	690df6b5d9878       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   42 seconds ago      Exited              kube-controller-manager   1                   1579b6bac9a5f       kube-controller-manager-test-preload-409829
	290ee8fc137b5       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   42 seconds ago      Exited              kube-apiserver            1                   1c5bcf163ffab       kube-apiserver-test-preload-409829
	a984abf465519       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   42 seconds ago      Running             kube-scheduler            1                   fba45cd64a62d       kube-scheduler-test-preload-409829
	
	
	==> coredns [37cfcb55fb77ad9b560bf0ac702d9d21da957fb80cd04b32c340bec1104046bf] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46134 - 879 "HINFO IN 3278667069182585121.8561703920805173968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043725437s
	
	
	==> describe nodes <==
	Name:               test-preload-409829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-409829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=test-preload-409829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_43_34_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:43:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-409829
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:45:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:45:28 +0000   Tue, 01 Apr 2025 20:43:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:45:28 +0000   Tue, 01 Apr 2025 20:43:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:45:28 +0000   Tue, 01 Apr 2025 20:43:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Apr 2025 20:45:28 +0000   Tue, 01 Apr 2025 20:45:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    test-preload-409829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 15041f89511941379acf69e172eb07e0
	  System UUID:                15041f89-5119-4137-9acf-69e172eb07e0
	  Boot ID:                    5e9299c1-08a0-4178-9521-bb4fb47ed1db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-zqlfl                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     109s
	  kube-system                 etcd-test-preload-409829                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m1s
	  kube-system                 kube-apiserver-test-preload-409829             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-test-preload-409829    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-fzwb5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-test-preload-409829             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x4 over 2m10s)  kubelet          Node test-preload-409829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m10s (x3 over 2m10s)  kubelet          Node test-preload-409829 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m10s (x4 over 2m10s)  kubelet          Node test-preload-409829 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s                   kubelet          Node test-preload-409829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                   kubelet          Node test-preload-409829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s                   kubelet          Node test-preload-409829 status is now: NodeHasSufficientPID
	  Normal  NodeReady                111s                   kubelet          Node test-preload-409829 status is now: NodeReady
	  Normal  RegisteredNode           110s                   node-controller  Node test-preload-409829 event: Registered Node test-preload-409829 in Controller
	  Normal  Starting                 43s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)      kubelet          Node test-preload-409829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)      kubelet          Node test-preload-409829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)      kubelet          Node test-preload-409829 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                     node-controller  Node test-preload-409829 event: Registered Node test-preload-409829 in Controller
	
	
	==> dmesg <==
	[Apr 1 20:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051081] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.943019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.707009] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.707378] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.507248] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.069693] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064012] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.165815] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.135295] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.288458] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[ +13.495318] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.059912] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031717] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +6.438185] kauditd_printk_skb: 95 callbacks suppressed
	[Apr 1 20:45] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.170483] systemd-fstab-generator[1888]: Ignoring "noauto" option for root device
	[  +5.715350] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [a7410445dcd8732c3dfb8d62ec3b371f750152e441f24615eb10209a42df0887] <==
	{"level":"info","ts":"2025-04-01T20:45:13.222Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"365d90f3070fcb7b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-01T20:45:13.222Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-01T20:45:13.223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b switched to configuration voters=(3917446624352127867)"}
	{"level":"info","ts":"2025-04-01T20:45:13.223Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4ca65266b0923ae6","local-member-id":"365d90f3070fcb7b","added-peer-id":"365d90f3070fcb7b","added-peer-peer-urls":["https://192.168.39.63:2380"]}
	{"level":"info","ts":"2025-04-01T20:45:13.223Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4ca65266b0923ae6","local-member-id":"365d90f3070fcb7b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:45:13.223Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:45:13.224Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:45:13.224Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"365d90f3070fcb7b","initial-advertise-peer-urls":["https://192.168.39.63:2380"],"listen-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.63:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:45:13.225Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:45:13.225Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2025-04-01T20:45:13.225Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b received MsgPreVoteResp from 365d90f3070fcb7b at term 2"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b received MsgVoteResp from 365d90f3070fcb7b at term 3"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"365d90f3070fcb7b became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:45:14.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 365d90f3070fcb7b elected leader 365d90f3070fcb7b at term 3"}
	{"level":"info","ts":"2025-04-01T20:45:14.811Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"365d90f3070fcb7b","local-member-attributes":"{Name:test-preload-409829 ClientURLs:[https://192.168.39.63:2379]}","request-path":"/0/members/365d90f3070fcb7b/attributes","cluster-id":"4ca65266b0923ae6","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:45:14.811Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:45:14.811Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:45:14.813Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:45:14.813Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.63:2379"}
	{"level":"info","ts":"2025-04-01T20:45:14.814Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:45:14.814Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:45:35 up 1 min,  0 users,  load average: 1.04, 0.34, 0.12
	Linux test-preload-409829 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [290ee8fc137b535ed3b431b06c475d06f84fe0bbe3202ee4b0facba747f47062] <==
	I0401 20:44:54.089400       1 server.go:558] external host was not specified, using 192.168.39.63
	I0401 20:44:54.094929       1 server.go:158] Version: v1.24.4
	I0401 20:44:54.094983       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:44:54.517757       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0401 20:44:54.520060       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0401 20:44:54.520135       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0401 20:44:54.521768       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0401 20:44:54.521861       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0401 20:44:54.527888       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:44:55.515482       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:44:55.528768       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:44:56.515939       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:44:57.371701       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:44:58.069456       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:45:00.165745       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:45:00.306656       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:45:03.686929       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:45:03.955480       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:45:10.520312       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0401 20:45:11.357941       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0401 20:45:14.527195       1 run.go:74] "command failed" err="context deadline exceeded"
	
	
	==> kube-apiserver [8af836510be260283828f95df6f955451b633b0ca9a068f575d3404d7be1017b] <==
	I0401 20:45:18.106911       1 controller.go:85] Starting OpenAPI V3 controller
	I0401 20:45:18.107025       1 naming_controller.go:291] Starting NamingConditionController
	I0401 20:45:18.108930       1 establishing_controller.go:76] Starting EstablishingController
	I0401 20:45:18.109118       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0401 20:45:18.109217       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0401 20:45:18.109304       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0401 20:45:18.186269       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0401 20:45:18.195699       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:45:18.195989       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0401 20:45:18.207293       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0401 20:45:18.207370       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	E0401 20:45:18.227804       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0401 20:45:18.270565       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 20:45:18.273969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 20:45:18.771282       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0401 20:45:18.983424       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:45:19.075323       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:45:20.055210       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0401 20:45:20.069078       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0401 20:45:20.118713       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0401 20:45:20.156473       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:45:20.171280       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:45:20.441648       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0401 20:45:30.677273       1 controller.go:611] quota admission added evaluator for: endpoints
	I0401 20:45:30.726271       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [51cfc749622a09911b920213d02017f8e20f18959ac46c2e35fc8d31c650b418] <==
	W0401 20:45:30.625331       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-409829. Assuming now as a timestamp.
	I0401 20:45:30.625433       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0401 20:45:30.625553       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0401 20:45:30.626282       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0401 20:45:30.627221       1 shared_informer.go:262] Caches are synced for namespace
	I0401 20:45:30.627517       1 event.go:294] "Event occurred" object="test-preload-409829" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-409829 event: Registered Node test-preload-409829 in Controller"
	I0401 20:45:30.627589       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0401 20:45:30.627664       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0401 20:45:30.627699       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0401 20:45:30.635172       1 shared_informer.go:262] Caches are synced for service account
	I0401 20:45:30.641796       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0401 20:45:30.644382       1 shared_informer.go:262] Caches are synced for deployment
	I0401 20:45:30.649609       1 shared_informer.go:262] Caches are synced for attach detach
	I0401 20:45:30.663652       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0401 20:45:30.673379       1 shared_informer.go:262] Caches are synced for disruption
	I0401 20:45:30.673459       1 disruption.go:371] Sending events to api server.
	I0401 20:45:30.707259       1 shared_informer.go:262] Caches are synced for persistent volume
	I0401 20:45:30.784897       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0401 20:45:30.788300       1 shared_informer.go:262] Caches are synced for resource quota
	I0401 20:45:30.836270       1 shared_informer.go:262] Caches are synced for job
	I0401 20:45:30.853996       1 shared_informer.go:262] Caches are synced for cronjob
	I0401 20:45:30.857433       1 shared_informer.go:262] Caches are synced for resource quota
	I0401 20:45:31.277804       1 shared_informer.go:262] Caches are synced for garbage collector
	I0401 20:45:31.320730       1 shared_informer.go:262] Caches are synced for garbage collector
	I0401 20:45:31.320754       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-controller-manager [690df6b5d987837f3ff756148ce889c4888ee9a97944fcc03d8601f1bfe54c99] <==
		/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc000378700, {0x4d02200?, 0xc00000e5a0}, 0x901?)
		/usr/local/go/src/crypto/tls/conn.go:807 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc000378700, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:614 +0x116
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:582
	crypto/tls.(*Conn).Read(0xc000378700, {0xc00103f000, 0x1000, 0x91a200?})
		/usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
	bufio.(*Reader).Read(0xc000275800, {0xc0003ec580, 0x9, 0x936b82?})
		/usr/local/go/src/bufio/bufio.go:236 +0x1b4
	io.ReadAtLeast({0x4cf9b00, 0xc000275800}, {0xc0003ec580, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:331 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:350
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0003ec580?, 0x9?, 0xc001fc4180?}, {0x4cf9b00?, 0xc000275800?})
		vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0003ec540)
		vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000f62f98)
		vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000194c00)
		vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		vendor/golang.org/x/net/http2/transport.go:725 +0xa65
	
	
	==> kube-proxy [fa635b7bf2a76b331a21f6deb8e8489428e5f0ba25105e8872331c744ed61601] <==
	I0401 20:45:20.396616       1 node.go:163] Successfully retrieved node IP: 192.168.39.63
	I0401 20:45:20.396792       1 server_others.go:138] "Detected node IP" address="192.168.39.63"
	I0401 20:45:20.396921       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0401 20:45:20.431773       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0401 20:45:20.431808       1 server_others.go:206] "Using iptables Proxier"
	I0401 20:45:20.432746       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0401 20:45:20.433574       1 server.go:661] "Version info" version="v1.24.4"
	I0401 20:45:20.433590       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:45:20.435233       1 config.go:317] "Starting service config controller"
	I0401 20:45:20.435622       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0401 20:45:20.435719       1 config.go:226] "Starting endpoint slice config controller"
	I0401 20:45:20.435741       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0401 20:45:20.437016       1 config.go:444] "Starting node config controller"
	I0401 20:45:20.437049       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0401 20:45:20.535902       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0401 20:45:20.535904       1 shared_informer.go:262] Caches are synced for service config
	I0401 20:45:20.537280       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [a984abf465519c973aa92af8e53484a3ed5a921e0f1c3b1e237f19776dc0f534] <==
	I0401 20:44:54.284141       1 serving.go:348] Generated self-signed cert in-memory
	W0401 20:45:04.719686       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.39.63:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0401 20:45:04.719726       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:45:04.719732       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:45:18.155034       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0401 20:45:18.155081       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:45:18.166761       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0401 20:45:18.166964       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:45:18.167011       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:45:18.167044       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 20:45:18.269950       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.728691    1133 topology_manager.go:200] "Topology Admit Handler"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.728935    1133 topology_manager.go:200] "Topology Admit Handler"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.729065    1133 topology_manager.go:200] "Topology Admit Handler"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: E0401 20:45:18.730714    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-zqlfl" podUID=178fe8c3-3aa6-47a0-9933-422bcca6d264
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.824596    1133 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=798cf616-42df-4733-8e1c-9c38b047888e path="/var/lib/kubelet/pods/798cf616-42df-4733-8e1c-9c38b047888e/volumes"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906457    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b507fb8e-3384-4276-8d7e-fb33696b5c2f-lib-modules\") pod \"kube-proxy-fzwb5\" (UID: \"b507fb8e-3384-4276-8d7e-fb33696b5c2f\") " pod="kube-system/kube-proxy-fzwb5"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906530    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ebd38b3b-1253-41c4-ada7-a2db0d2dc032-tmp\") pod \"storage-provisioner\" (UID: \"ebd38b3b-1253-41c4-ada7-a2db0d2dc032\") " pod="kube-system/storage-provisioner"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906554    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97v8k\" (UniqueName: \"kubernetes.io/projected/ebd38b3b-1253-41c4-ada7-a2db0d2dc032-kube-api-access-97v8k\") pod \"storage-provisioner\" (UID: \"ebd38b3b-1253-41c4-ada7-a2db0d2dc032\") " pod="kube-system/storage-provisioner"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906574    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b507fb8e-3384-4276-8d7e-fb33696b5c2f-kube-proxy\") pod \"kube-proxy-fzwb5\" (UID: \"b507fb8e-3384-4276-8d7e-fb33696b5c2f\") " pod="kube-system/kube-proxy-fzwb5"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906595    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b507fb8e-3384-4276-8d7e-fb33696b5c2f-xtables-lock\") pod \"kube-proxy-fzwb5\" (UID: \"b507fb8e-3384-4276-8d7e-fb33696b5c2f\") " pod="kube-system/kube-proxy-fzwb5"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906613    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume\") pod \"coredns-6d4b75cb6d-zqlfl\" (UID: \"178fe8c3-3aa6-47a0-9933-422bcca6d264\") " pod="kube-system/coredns-6d4b75cb6d-zqlfl"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906638    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57w8\" (UniqueName: \"kubernetes.io/projected/178fe8c3-3aa6-47a0-9933-422bcca6d264-kube-api-access-l57w8\") pod \"coredns-6d4b75cb6d-zqlfl\" (UID: \"178fe8c3-3aa6-47a0-9933-422bcca6d264\") " pod="kube-system/coredns-6d4b75cb6d-zqlfl"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906658    1133 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m7kg\" (UniqueName: \"kubernetes.io/projected/b507fb8e-3384-4276-8d7e-fb33696b5c2f-kube-api-access-4m7kg\") pod \"kube-proxy-fzwb5\" (UID: \"b507fb8e-3384-4276-8d7e-fb33696b5c2f\") " pod="kube-system/kube-proxy-fzwb5"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.906671    1133 reconciler.go:159] "Reconciler: start to sync state"
	Apr 01 20:45:18 test-preload-409829 kubelet[1133]: I0401 20:45:18.912156    1133 scope.go:110] "RemoveContainer" containerID="690df6b5d987837f3ff756148ce889c4888ee9a97944fcc03d8601f1bfe54c99"
	Apr 01 20:45:19 test-preload-409829 kubelet[1133]: E0401 20:45:19.013222    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 01 20:45:19 test-preload-409829 kubelet[1133]: E0401 20:45:19.013333    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume podName:178fe8c3-3aa6-47a0-9933-422bcca6d264 nodeName:}" failed. No retries permitted until 2025-04-01 20:45:19.513302419 +0000 UTC m=+26.954099244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume") pod "coredns-6d4b75cb6d-zqlfl" (UID: "178fe8c3-3aa6-47a0-9933-422bcca6d264") : object "kube-system"/"coredns" not registered
	Apr 01 20:45:19 test-preload-409829 kubelet[1133]: E0401 20:45:19.515523    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 01 20:45:19 test-preload-409829 kubelet[1133]: E0401 20:45:19.515724    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume podName:178fe8c3-3aa6-47a0-9933-422bcca6d264 nodeName:}" failed. No retries permitted until 2025-04-01 20:45:20.515706584 +0000 UTC m=+27.956503392 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume") pod "coredns-6d4b75cb6d-zqlfl" (UID: "178fe8c3-3aa6-47a0-9933-422bcca6d264") : object "kube-system"/"coredns" not registered
	Apr 01 20:45:19 test-preload-409829 kubelet[1133]: E0401 20:45:19.814124    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-zqlfl" podUID=178fe8c3-3aa6-47a0-9933-422bcca6d264
	Apr 01 20:45:20 test-preload-409829 kubelet[1133]: E0401 20:45:20.526342    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 01 20:45:20 test-preload-409829 kubelet[1133]: E0401 20:45:20.526452    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume podName:178fe8c3-3aa6-47a0-9933-422bcca6d264 nodeName:}" failed. No retries permitted until 2025-04-01 20:45:22.526437407 +0000 UTC m=+29.967234213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume") pod "coredns-6d4b75cb6d-zqlfl" (UID: "178fe8c3-3aa6-47a0-9933-422bcca6d264") : object "kube-system"/"coredns" not registered
	Apr 01 20:45:21 test-preload-409829 kubelet[1133]: E0401 20:45:21.814781    1133 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-zqlfl" podUID=178fe8c3-3aa6-47a0-9933-422bcca6d264
	Apr 01 20:45:22 test-preload-409829 kubelet[1133]: E0401 20:45:22.545459    1133 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 01 20:45:22 test-preload-409829 kubelet[1133]: E0401 20:45:22.545572    1133 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume podName:178fe8c3-3aa6-47a0-9933-422bcca6d264 nodeName:}" failed. No retries permitted until 2025-04-01 20:45:26.545553099 +0000 UTC m=+33.986349917 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/178fe8c3-3aa6-47a0-9933-422bcca6d264-config-volume") pod "coredns-6d4b75cb6d-zqlfl" (UID: "178fe8c3-3aa6-47a0-9933-422bcca6d264") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [7271c8c6ff216ea58c6d71de6ddea878cf75fca132e2ec7b79317413a5d2035a] <==
	I0401 20:45:19.910799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-409829 -n test-preload-409829
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-409829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-409829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-409829
--- FAIL: TestPreload (208.84s)
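For local triage, the failing test can in principle be re-run on its own against the same driver and runtime seen in this report. A minimal sketch follows; the test/integration path and the -minikube-start-args flag are assumptions based on the usual minikube integration-test layout, not something taken from this log:

	# Hypothetical local re-run of TestPreload (sketch, not the CI invocation);
	# assumes the minikube source tree, a built out/minikube-linux-amd64,
	# and a working KVM/libvirt host.
	go test ./test/integration -run 'TestPreload' -timeout 60m \
		-minikube-start-args="--driver=kvm2 --container-runtime=crio"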

                                                
                                    
TestKubernetesUpgrade (397.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m29.360859997s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-881088] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-881088" primary control-plane node in "kubernetes-upgrade-881088" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:47:33.245382   49910 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:47:33.245642   49910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:47:33.245653   49910 out.go:358] Setting ErrFile to fd 2...
	I0401 20:47:33.245657   49910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:47:33.245906   49910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:47:33.246508   49910 out.go:352] Setting JSON to false
	I0401 20:47:33.247681   49910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5397,"bootTime":1743535056,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:47:33.247768   49910 start.go:139] virtualization: kvm guest
	I0401 20:47:33.249675   49910 out.go:177] * [kubernetes-upgrade-881088] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:47:33.251112   49910 notify.go:220] Checking for updates...
	I0401 20:47:33.252058   49910 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:47:33.253282   49910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:47:33.255116   49910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:47:33.257726   49910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:47:33.259707   49910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:47:33.260903   49910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:47:33.262429   49910 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:47:33.298583   49910 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 20:47:33.300238   49910 start.go:297] selected driver: kvm2
	I0401 20:47:33.300256   49910 start.go:901] validating driver "kvm2" against <nil>
	I0401 20:47:33.300283   49910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:47:33.300976   49910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:47:33.301043   49910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:47:33.317649   49910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:47:33.317722   49910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:47:33.318026   49910 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 20:47:33.318070   49910 cni.go:84] Creating CNI manager for ""
	I0401 20:47:33.318118   49910 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:47:33.318127   49910 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 20:47:33.318179   49910 start.go:340] cluster config:
	{Name:kubernetes-upgrade-881088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:47:33.318407   49910 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:47:33.320347   49910 out.go:177] * Starting "kubernetes-upgrade-881088" primary control-plane node in "kubernetes-upgrade-881088" cluster
	I0401 20:47:33.322146   49910 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:47:33.322196   49910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 20:47:33.322205   49910 cache.go:56] Caching tarball of preloaded images
	I0401 20:47:33.322311   49910 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:47:33.322326   49910 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 20:47:33.322764   49910 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/config.json ...
	I0401 20:47:33.322801   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/config.json: {Name:mkeb410503ced6e8e206943d074426a39d7bbad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:47:33.322990   49910 start.go:360] acquireMachinesLock for kubernetes-upgrade-881088: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 20:47:33.323031   49910 start.go:364] duration metric: took 24.069µs to acquireMachinesLock for "kubernetes-upgrade-881088"
	I0401 20:47:33.323046   49910 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-881088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20
.0 ClusterName:kubernetes-upgrade-881088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:47:33.323098   49910 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 20:47:33.324898   49910 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 20:47:33.325055   49910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:47:33.325107   49910 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:47:33.341176   49910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I0401 20:47:33.341609   49910 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:47:33.342182   49910 main.go:141] libmachine: Using API Version  1
	I0401 20:47:33.342206   49910 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:47:33.342587   49910 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:47:33.342802   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetMachineName
	I0401 20:47:33.342972   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:33.343143   49910 start.go:159] libmachine.API.Create for "kubernetes-upgrade-881088" (driver="kvm2")
	I0401 20:47:33.343173   49910 client.go:168] LocalClient.Create starting
	I0401 20:47:33.343202   49910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 20:47:33.343236   49910 main.go:141] libmachine: Decoding PEM data...
	I0401 20:47:33.343249   49910 main.go:141] libmachine: Parsing certificate...
	I0401 20:47:33.343297   49910 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 20:47:33.343316   49910 main.go:141] libmachine: Decoding PEM data...
	I0401 20:47:33.343334   49910 main.go:141] libmachine: Parsing certificate...
	I0401 20:47:33.343357   49910 main.go:141] libmachine: Running pre-create checks...
	I0401 20:47:33.343367   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .PreCreateCheck
	I0401 20:47:33.343743   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetConfigRaw
	I0401 20:47:33.344105   49910 main.go:141] libmachine: Creating machine...
	I0401 20:47:33.344118   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .Create
	I0401 20:47:33.344257   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) creating KVM machine...
	I0401 20:47:33.344272   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) creating network...
	I0401 20:47:33.345455   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found existing default KVM network
	I0401 20:47:33.346095   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:33.345936   49968 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00019d530}
	I0401 20:47:33.346186   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | created network xml: 
	I0401 20:47:33.346208   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | <network>
	I0401 20:47:33.346255   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |   <name>mk-kubernetes-upgrade-881088</name>
	I0401 20:47:33.346273   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |   <dns enable='no'/>
	I0401 20:47:33.346281   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |   
	I0401 20:47:33.346292   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 20:47:33.346300   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |     <dhcp>
	I0401 20:47:33.346312   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 20:47:33.346323   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |     </dhcp>
	I0401 20:47:33.346334   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |   </ip>
	I0401 20:47:33.346341   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG |   
	I0401 20:47:33.346347   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | </network>
	I0401 20:47:33.346357   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | 
	I0401 20:47:33.351352   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | trying to create private KVM network mk-kubernetes-upgrade-881088 192.168.39.0/24...
	I0401 20:47:33.420870   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | private KVM network mk-kubernetes-upgrade-881088 192.168.39.0/24 created
	I0401 20:47:33.420895   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:33.420839   49968 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:47:33.420904   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088 ...
	I0401 20:47:33.420915   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 20:47:33.420944   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 20:47:33.676961   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:33.676836   49968 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa...
	I0401 20:47:33.947757   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:33.947641   49968 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/kubernetes-upgrade-881088.rawdisk...
	I0401 20:47:33.947778   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Writing magic tar header
	I0401 20:47:33.947808   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Writing SSH key tar header
	I0401 20:47:33.947820   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:33.947753   49968 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088 ...
	I0401 20:47:33.947836   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088
	I0401 20:47:33.947856   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088 (perms=drwx------)
	I0401 20:47:33.947872   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 20:47:33.947880   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:47:33.947888   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 20:47:33.947897   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 20:47:33.947903   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 20:47:33.947912   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 20:47:33.947922   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 20:47:33.947930   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) creating domain...
	I0401 20:47:33.947943   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 20:47:33.947953   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 20:47:33.947975   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home/jenkins
	I0401 20:47:33.947992   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | checking permissions on dir: /home
	I0401 20:47:33.948006   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | skipping /home - not owner
	I0401 20:47:33.949153   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) define libvirt domain using xml: 
	I0401 20:47:33.949190   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) <domain type='kvm'>
	I0401 20:47:33.949214   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <name>kubernetes-upgrade-881088</name>
	I0401 20:47:33.949232   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <memory unit='MiB'>2200</memory>
	I0401 20:47:33.949241   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <vcpu>2</vcpu>
	I0401 20:47:33.949252   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <features>
	I0401 20:47:33.949261   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <acpi/>
	I0401 20:47:33.949271   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <apic/>
	I0401 20:47:33.949283   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <pae/>
	I0401 20:47:33.949293   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     
	I0401 20:47:33.949304   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   </features>
	I0401 20:47:33.949314   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <cpu mode='host-passthrough'>
	I0401 20:47:33.949319   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   
	I0401 20:47:33.949325   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   </cpu>
	I0401 20:47:33.949331   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <os>
	I0401 20:47:33.949337   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <type>hvm</type>
	I0401 20:47:33.949360   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <boot dev='cdrom'/>
	I0401 20:47:33.949381   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <boot dev='hd'/>
	I0401 20:47:33.949393   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <bootmenu enable='no'/>
	I0401 20:47:33.949403   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   </os>
	I0401 20:47:33.949411   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   <devices>
	I0401 20:47:33.949422   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <disk type='file' device='cdrom'>
	I0401 20:47:33.949442   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/boot2docker.iso'/>
	I0401 20:47:33.949454   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <target dev='hdc' bus='scsi'/>
	I0401 20:47:33.949462   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <readonly/>
	I0401 20:47:33.949471   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </disk>
	I0401 20:47:33.949488   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <disk type='file' device='disk'>
	I0401 20:47:33.949501   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 20:47:33.949519   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/kubernetes-upgrade-881088.rawdisk'/>
	I0401 20:47:33.949534   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <target dev='hda' bus='virtio'/>
	I0401 20:47:33.949545   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </disk>
	I0401 20:47:33.949554   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <interface type='network'>
	I0401 20:47:33.949562   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <source network='mk-kubernetes-upgrade-881088'/>
	I0401 20:47:33.949577   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <model type='virtio'/>
	I0401 20:47:33.949589   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </interface>
	I0401 20:47:33.949605   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <interface type='network'>
	I0401 20:47:33.949617   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <source network='default'/>
	I0401 20:47:33.949627   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <model type='virtio'/>
	I0401 20:47:33.949635   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </interface>
	I0401 20:47:33.949649   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <serial type='pty'>
	I0401 20:47:33.949658   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <target port='0'/>
	I0401 20:47:33.949663   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </serial>
	I0401 20:47:33.949679   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <console type='pty'>
	I0401 20:47:33.949691   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <target type='serial' port='0'/>
	I0401 20:47:33.949700   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </console>
	I0401 20:47:33.949710   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     <rng model='virtio'>
	I0401 20:47:33.949735   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)       <backend model='random'>/dev/random</backend>
	I0401 20:47:33.949762   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     </rng>
	I0401 20:47:33.949775   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     
	I0401 20:47:33.949781   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)     
	I0401 20:47:33.949787   49910 main.go:141] libmachine: (kubernetes-upgrade-881088)   </devices>
	I0401 20:47:33.949804   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) </domain>
	I0401 20:47:33.949816   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) 
	I0401 20:47:33.953924   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:7d:17:2d in network default
	I0401 20:47:33.954580   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) starting domain...
	I0401 20:47:33.954594   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:33.954600   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) ensuring networks are active...
	I0401 20:47:33.955400   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Ensuring network default is active
	I0401 20:47:33.955648   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Ensuring network mk-kubernetes-upgrade-881088 is active
	I0401 20:47:33.956388   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) getting domain XML...
	I0401 20:47:33.957109   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) creating domain...
	I0401 20:47:35.258552   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) waiting for IP...
	I0401 20:47:35.259302   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:35.259655   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:35.259722   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:35.259658   49968 retry.go:31] will retry after 245.541861ms: waiting for domain to come up
	I0401 20:47:35.507166   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:35.507611   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:35.507638   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:35.507592   49968 retry.go:31] will retry after 356.294636ms: waiting for domain to come up
	I0401 20:47:35.865280   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:35.865778   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:35.865801   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:35.865691   49968 retry.go:31] will retry after 357.525913ms: waiting for domain to come up
	I0401 20:47:36.225381   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:36.225868   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:36.225903   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:36.225837   49968 retry.go:31] will retry after 540.003633ms: waiting for domain to come up
	I0401 20:47:36.767924   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:36.768372   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:36.768400   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:36.768338   49968 retry.go:31] will retry after 520.827153ms: waiting for domain to come up
	I0401 20:47:37.291044   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:37.291489   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:37.291528   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:37.291469   49968 retry.go:31] will retry after 591.031677ms: waiting for domain to come up
	I0401 20:47:37.884220   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:37.884725   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:37.884787   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:37.884727   49968 retry.go:31] will retry after 944.37471ms: waiting for domain to come up
	I0401 20:47:38.830428   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:38.830904   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:38.830933   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:38.830880   49968 retry.go:31] will retry after 1.230280149s: waiting for domain to come up
	I0401 20:47:40.063456   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:40.063980   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:40.064007   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:40.063946   49968 retry.go:31] will retry after 1.803867664s: waiting for domain to come up
	I0401 20:47:41.869263   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:41.869763   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:41.869836   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:41.869748   49968 retry.go:31] will retry after 2.018147278s: waiting for domain to come up
	I0401 20:47:43.889368   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:43.889848   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:43.889876   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:43.889804   49968 retry.go:31] will retry after 2.83782435s: waiting for domain to come up
	I0401 20:47:46.730665   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:46.731040   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:46.731082   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:46.731007   49968 retry.go:31] will retry after 3.147347729s: waiting for domain to come up
	I0401 20:47:49.880503   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:49.880861   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find current IP address of domain kubernetes-upgrade-881088 in network mk-kubernetes-upgrade-881088
	I0401 20:47:49.880942   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | I0401 20:47:49.880840   49968 retry.go:31] will retry after 4.532281912s: waiting for domain to come up
	I0401 20:47:54.414844   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.415346   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) found domain IP: 192.168.39.185
	I0401 20:47:54.415376   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has current primary IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.415388   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) reserving static IP address...
	I0401 20:47:54.415690   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-881088", mac: "52:54:00:c1:3e:fe", ip: "192.168.39.185"} in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.493769   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) reserved static IP address 192.168.39.185 for domain kubernetes-upgrade-881088
	I0401 20:47:54.493824   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) waiting for SSH...
	I0401 20:47:54.493834   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Getting to WaitForSSH function...
	I0401 20:47:54.496307   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.496677   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:54.496705   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.496949   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Using SSH client type: external
	I0401 20:47:54.497012   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa (-rw-------)
	I0401 20:47:54.497061   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 20:47:54.497078   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | About to run SSH command:
	I0401 20:47:54.497087   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | exit 0
	I0401 20:47:54.626362   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | SSH cmd err, output: <nil>: 
	I0401 20:47:54.626884   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) KVM machine creation complete
	I0401 20:47:54.627142   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetConfigRaw
	I0401 20:47:54.627653   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:54.627891   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:54.628035   49910 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 20:47:54.628050   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetState
	I0401 20:47:54.629471   49910 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 20:47:54.629488   49910 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 20:47:54.629496   49910 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 20:47:54.629522   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:54.632953   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.633986   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:54.634020   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.634200   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:54.634399   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:54.634588   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:54.634761   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:54.634939   49910 main.go:141] libmachine: Using SSH client type: native
	I0401 20:47:54.635231   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0401 20:47:54.635261   49910 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 20:47:54.753450   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:47:54.753476   49910 main.go:141] libmachine: Detecting the provisioner...
	I0401 20:47:54.753487   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:54.756116   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.756449   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:54.756472   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.756643   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:54.756811   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:54.756932   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:54.757088   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:54.757221   49910 main.go:141] libmachine: Using SSH client type: native
	I0401 20:47:54.757410   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0401 20:47:54.757427   49910 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 20:47:54.875574   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 20:47:54.875705   49910 main.go:141] libmachine: found compatible host: buildroot
	I0401 20:47:54.875720   49910 main.go:141] libmachine: Provisioning with buildroot...
	I0401 20:47:54.875728   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetMachineName
	I0401 20:47:54.876029   49910 buildroot.go:166] provisioning hostname "kubernetes-upgrade-881088"
	I0401 20:47:54.876066   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetMachineName
	I0401 20:47:54.876266   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:54.879555   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.879866   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:54.879899   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:54.880129   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:54.880321   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:54.880494   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:54.880749   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:54.880911   49910 main.go:141] libmachine: Using SSH client type: native
	I0401 20:47:54.881108   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0401 20:47:54.881120   49910 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-881088 && echo "kubernetes-upgrade-881088" | sudo tee /etc/hostname
	I0401 20:47:55.014489   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-881088
	
	I0401 20:47:55.014526   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:55.017291   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.017657   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.017695   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.017839   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:55.018035   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.018235   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.018342   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:55.018515   49910 main.go:141] libmachine: Using SSH client type: native
	I0401 20:47:55.018764   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0401 20:47:55.018780   49910 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-881088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-881088/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-881088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:47:55.144465   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
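(For readers tracing the provisioning step above: a minimal sketch, not minikube's actual source, of how the idempotent /etc/hosts edit that was just run over SSH could be assembled. The helper name hostsUpdateCmd is hypothetical; only the hostname and the shell fragment come from the log.)

package main

import "fmt"

// hostsUpdateCmd renders the same idempotent /etc/hosts edit the log shows:
// if no line for the hostname exists, either rewrite the 127.0.1.1 entry or
// append a new one.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("kubernetes-upgrade-881088"))
}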
	I0401 20:47:55.144503   49910 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 20:47:55.144530   49910 buildroot.go:174] setting up certificates
	I0401 20:47:55.144560   49910 provision.go:84] configureAuth start
	I0401 20:47:55.144575   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetMachineName
	I0401 20:47:55.144900   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetIP
	I0401 20:47:55.147701   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.148153   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.148181   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.148372   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:55.150907   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.151317   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.151388   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.151495   49910 provision.go:143] copyHostCerts
	I0401 20:47:55.151558   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 20:47:55.151587   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 20:47:55.151698   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 20:47:55.151835   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 20:47:55.151847   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 20:47:55.151884   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 20:47:55.151963   49910 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 20:47:55.151974   49910 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 20:47:55.152008   49910 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 20:47:55.152097   49910 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-881088 san=[127.0.0.1 192.168.39.185 kubernetes-upgrade-881088 localhost minikube]
	I0401 20:47:55.405989   49910 provision.go:177] copyRemoteCerts
	I0401 20:47:55.406046   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:47:55.406069   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:55.408672   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.409022   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.409050   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.409283   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:55.409496   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.409642   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:55.409788   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa Username:docker}
	I0401 20:47:55.501014   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:47:55.535180   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0401 20:47:55.560561   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:47:55.586618   49910 provision.go:87] duration metric: took 442.044935ms to configureAuth
	I0401 20:47:55.586647   49910 buildroot.go:189] setting minikube options for container-runtime
	I0401 20:47:55.586875   49910 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:47:55.586951   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:55.589804   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.590257   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.590288   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.590505   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:55.590722   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.590859   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.591055   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:55.591223   49910 main.go:141] libmachine: Using SSH client type: native
	I0401 20:47:55.591495   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0401 20:47:55.591526   49910 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:47:55.844412   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:47:55.844439   49910 main.go:141] libmachine: Checking connection to Docker...
	I0401 20:47:55.844451   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetURL
	I0401 20:47:55.845802   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | using libvirt version 6000000
	I0401 20:47:55.848033   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.848451   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.848484   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.848672   49910 main.go:141] libmachine: Docker is up and running!
	I0401 20:47:55.848689   49910 main.go:141] libmachine: Reticulating splines...
	I0401 20:47:55.848697   49910 client.go:171] duration metric: took 22.505515477s to LocalClient.Create
	I0401 20:47:55.848726   49910 start.go:167] duration metric: took 22.505584063s to libmachine.API.Create "kubernetes-upgrade-881088"
	I0401 20:47:55.848739   49910 start.go:293] postStartSetup for "kubernetes-upgrade-881088" (driver="kvm2")
	I0401 20:47:55.848754   49910 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:47:55.848780   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:55.849023   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:47:55.849045   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:55.851104   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.851452   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.851486   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.851593   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:55.851754   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.851882   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:55.851985   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa Username:docker}
	I0401 20:47:55.949494   49910 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:47:55.955095   49910 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 20:47:55.955122   49910 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 20:47:55.955187   49910 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 20:47:55.955285   49910 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 20:47:55.955474   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:47:55.966190   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:47:55.992413   49910 start.go:296] duration metric: took 143.657948ms for postStartSetup
	I0401 20:47:55.992465   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetConfigRaw
	I0401 20:47:55.993168   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetIP
	I0401 20:47:55.995815   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.996156   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.996189   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.996435   49910 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/config.json ...
	I0401 20:47:55.996609   49910 start.go:128] duration metric: took 22.673502012s to createHost
	I0401 20:47:55.996630   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:55.999042   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.999433   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:55.999462   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:55.999584   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:55.999778   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:55.999937   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:56.000045   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:56.000208   49910 main.go:141] libmachine: Using SSH client type: native
	I0401 20:47:56.000491   49910 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0401 20:47:56.000504   49910 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 20:47:56.119690   49910 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743540476.089530298
	
	I0401 20:47:56.119711   49910 fix.go:216] guest clock: 1743540476.089530298
	I0401 20:47:56.119719   49910 fix.go:229] Guest: 2025-04-01 20:47:56.089530298 +0000 UTC Remote: 2025-04-01 20:47:55.996620175 +0000 UTC m=+22.809170735 (delta=92.910123ms)
	I0401 20:47:56.119744   49910 fix.go:200] guest clock delta is within tolerance: 92.910123ms
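(The guest-clock check above reduces to comparing the absolute delta between the guest's "date +%s.%N" and the host's wall clock against a tolerance. A minimal sketch; the 2s tolerance and the function name withinTolerance are assumptions, only the ~92.9ms delta comes from the log.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and whether it
// falls inside tol.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(92 * time.Millisecond) // comparable to the ~92.9ms delta reported above
	d, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v, within tolerance=%v\n", d, ok)
}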
	I0401 20:47:56.119750   49910 start.go:83] releasing machines lock for "kubernetes-upgrade-881088", held for 22.796713103s
	I0401 20:47:56.119779   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:56.120031   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetIP
	I0401 20:47:56.123140   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:56.123531   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:56.123564   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:56.123750   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:56.124338   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:56.124568   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:47:56.124664   49910 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:47:56.124701   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:56.124831   49910 ssh_runner.go:195] Run: cat /version.json
	I0401 20:47:56.124865   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:47:56.127900   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:56.128054   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:56.128306   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:56.128334   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:56.128360   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:56.128372   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:56.128484   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:56.128662   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:56.128692   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:47:56.128872   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:47:56.128895   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:56.129061   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:47:56.129118   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa Username:docker}
	I0401 20:47:56.129213   49910 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa Username:docker}
	I0401 20:47:56.249533   49910 ssh_runner.go:195] Run: systemctl --version
	I0401 20:47:56.256172   49910 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:47:56.417778   49910 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 20:47:56.425165   49910 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 20:47:56.425226   49910 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:47:56.443496   49910 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 20:47:56.443519   49910 start.go:495] detecting cgroup driver to use...
	I0401 20:47:56.443593   49910 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:47:56.465677   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:47:56.485681   49910 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:47:56.485742   49910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:47:56.502786   49910 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:47:56.517590   49910 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:47:56.641806   49910 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:47:56.822479   49910 docker.go:233] disabling docker service ...
	I0401 20:47:56.822555   49910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:47:56.839796   49910 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:47:56.855016   49910 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:47:57.014051   49910 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:47:57.157500   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:47:57.172188   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:47:57.195025   49910 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:47:57.195114   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:47:57.208666   49910 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:47:57.208726   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:47:57.225269   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:47:57.240655   49910 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:47:57.252146   49910 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:47:57.268742   49910 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:47:57.280756   49910 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 20:47:57.280805   49910 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 20:47:57.294560   49910 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
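(The netfilter probe above follows a check-then-fallback pattern: if the bridge sysctl key is missing, load br_netfilter and enable IPv4 forwarding. A minimal local sketch under the assumption of passwordless sudo; ensureBridgeNetfilter is a hypothetical helper, not minikube's ssh_runner.)

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge-netfilter sysctl and, if the key is
// missing, loads br_netfilter and enables IPv4 forwarding, mirroring the
// fallback sequence in the log above.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl key present, nothing to do
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
		return fmt.Errorf("modprobe br_netfilter failed")
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("bridge netfilter setup failed:", err)
	}
}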
	I0401 20:47:57.304630   49910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:47:57.449567   49910 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:47:57.552099   49910 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:47:57.552168   49910 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:47:57.557572   49910 start.go:563] Will wait 60s for crictl version
	I0401 20:47:57.557641   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:47:57.561603   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:47:57.603327   49910 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 20:47:57.603404   49910 ssh_runner.go:195] Run: crio --version
	I0401 20:47:57.636991   49910 ssh_runner.go:195] Run: crio --version
	I0401 20:47:57.671094   49910 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 20:47:57.672648   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetIP
	I0401 20:47:57.678805   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:57.679220   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:47:49 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:47:57.679249   49910 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:47:57.679494   49910 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 20:47:57.684236   49910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:47:57.698483   49910 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-881088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:47:57.698613   49910 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:47:57.698673   49910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:47:57.744221   49910 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:47:57.744298   49910 ssh_runner.go:195] Run: which lz4
	I0401 20:47:57.748429   49910 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:47:57.752567   49910 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:47:57.752593   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:47:59.637281   49910 crio.go:462] duration metric: took 1.888873971s to copy over tarball
	I0401 20:47:59.637363   49910 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:48:02.389160   49910 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751762734s)
	I0401 20:48:02.389207   49910 crio.go:469] duration metric: took 2.751898811s to extract the tarball
	I0401 20:48:02.389217   49910 ssh_runner.go:146] rm: /preloaded.tar.lz4
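(The preload step above copies a .tar.lz4 to the guest and unpacks it with extended attributes preserved. A minimal sketch of the equivalent local extraction, assuming lz4 is installed; extractPreload and the paths in main are illustrative, the tar flags are taken from the log.)

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into dest with the same
// flags visible in the log above (xattrs preserved, lz4 decompression).
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}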
	I0401 20:48:02.433249   49910 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:48:02.490458   49910 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:48:02.490483   49910 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:48:02.490582   49910 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:02.490582   49910 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:48:02.490644   49910 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:02.490673   49910 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:48:02.490702   49910 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:48:02.490702   49910 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:02.490656   49910 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:02.490708   49910 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:02.492164   49910 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:02.492179   49910 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:02.492195   49910 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:02.492166   49910 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:48:02.492165   49910 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:48:02.492173   49910 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:02.492174   49910 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:02.492526   49910 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:48:02.649118   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:02.649908   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:02.655421   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:02.667402   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:02.670472   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:48:02.673666   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:02.719383   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:48:02.747705   49910 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:48:02.747750   49910 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:02.747799   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.795939   49910 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:48:02.795973   49910 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:02.796028   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.835672   49910 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:48:02.835722   49910 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:02.835780   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.853428   49910 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:48:02.853473   49910 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:02.853493   49910 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:48:02.853517   49910 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:02.853549   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.853553   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.853441   49910 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:48:02.853589   49910 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:48:02.853613   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.853632   49910 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:48:02.853659   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:02.853664   49910 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:48:02.853750   49910 ssh_runner.go:195] Run: which crictl
	I0401 20:48:02.853749   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:02.853766   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:02.872774   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:48:02.950487   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:02.950543   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:48:02.950559   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:02.950586   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:02.950662   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:02.950667   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:02.954861   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:48:03.101683   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:48:03.101762   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:48:03.102925   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:03.102939   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:48:03.102972   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:48:03.103009   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:03.103067   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:48:03.253251   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:48:03.253353   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:48:03.263082   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:48:03.263190   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:48:03.263305   49910 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:48:03.263354   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:48:03.263398   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:48:03.331891   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:48:03.331946   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:48:03.336261   49910 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:48:03.853189   49910 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:48:04.000556   49910 cache_images.go:92] duration metric: took 1.510056154s to LoadCachedImages
	W0401 20:48:04.000658   49910 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0401 20:48:04.000675   49910 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.20.0 crio true true} ...
	I0401 20:48:04.000789   49910 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-881088 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:48:04.000878   49910 ssh_runner.go:195] Run: crio config
	I0401 20:48:04.049968   49910 cni.go:84] Creating CNI manager for ""
	I0401 20:48:04.049994   49910 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:48:04.050012   49910 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:48:04.050037   49910 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-881088 NodeName:kubernetes-upgrade-881088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:48:04.050187   49910 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-881088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:48:04.050275   49910 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:48:04.061033   49910 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:48:04.061094   49910 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:48:04.071474   49910 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0401 20:48:04.092047   49910 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:48:04.112005   49910 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0401 20:48:04.130149   49910 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0401 20:48:04.134459   49910 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:48:04.148297   49910 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:48:04.266141   49910 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:48:04.284036   49910 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088 for IP: 192.168.39.185
	I0401 20:48:04.284065   49910 certs.go:194] generating shared ca certs ...
	I0401 20:48:04.284084   49910 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.284274   49910 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 20:48:04.284335   49910 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 20:48:04.284353   49910 certs.go:256] generating profile certs ...
	I0401 20:48:04.284430   49910 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.key
	I0401 20:48:04.284458   49910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.crt with IP's: []
	I0401 20:48:04.313792   49910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.crt ...
	I0401 20:48:04.313819   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.crt: {Name:mk7f31305faade3e6024369bba15e572cb138730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.313982   49910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.key ...
	I0401 20:48:04.313995   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.key: {Name:mk161b6ba8a059f33c7351d8c2cf9dd62a21f4b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.314067   49910 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.key.83a3d514
	I0401 20:48:04.314105   49910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.crt.83a3d514 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I0401 20:48:04.368632   49910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.crt.83a3d514 ...
	I0401 20:48:04.368670   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.crt.83a3d514: {Name:mkcbd4e761b01d36d1cf834078c2814e30dca669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.368871   49910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.key.83a3d514 ...
	I0401 20:48:04.368890   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.key.83a3d514: {Name:mkdbbccd43979535e40a6b0a6129a57ba3c6fabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.368991   49910 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.crt.83a3d514 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.crt
	I0401 20:48:04.369101   49910 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.key.83a3d514 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.key
	I0401 20:48:04.369197   49910 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.key
	I0401 20:48:04.369222   49910 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.crt with IP's: []
	I0401 20:48:04.502136   49910 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.crt ...
	I0401 20:48:04.502166   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.crt: {Name:mk6c9ae57253391bbeae18f4ae287521dafe60a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.502356   49910 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.key ...
	I0401 20:48:04.502375   49910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.key: {Name:mk610af6bf92baba03c0e4c04eb56cf0a0a7c3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:48:04.502546   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 20:48:04.502581   49910 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 20:48:04.502591   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:48:04.502615   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:48:04.502638   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:48:04.502659   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 20:48:04.502694   49910 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:48:04.503268   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:48:04.529587   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 20:48:04.554617   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:48:04.580579   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:48:04.605837   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 20:48:04.631055   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:48:04.656501   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:48:04.681043   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:48:04.705974   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 20:48:04.732069   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:48:04.760325   49910 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 20:48:04.789612   49910 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:48:04.808128   49910 ssh_runner.go:195] Run: openssl version
	I0401 20:48:04.814741   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 20:48:04.827321   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 20:48:04.832460   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 20:48:04.832541   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 20:48:04.839125   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:48:04.852769   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:48:04.866774   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:48:04.872096   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:48:04.872184   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:48:04.878427   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:48:04.891789   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 20:48:04.904814   49910 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 20:48:04.910091   49910 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 20:48:04.910171   49910 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 20:48:04.916241   49910 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 20:48:04.929139   49910 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:48:04.933785   49910 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:48:04.933844   49910 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-881088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:48:04.933929   49910 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:48:04.933978   49910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:48:04.981656   49910 cri.go:89] found id: ""
	I0401 20:48:04.981749   49910 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:48:04.994235   49910 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:48:05.006832   49910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:48:05.019422   49910 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:48:05.019448   49910 kubeadm.go:157] found existing configuration files:
	
	I0401 20:48:05.019489   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:48:05.030276   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:48:05.030336   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:48:05.041737   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:48:05.051738   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:48:05.051794   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:48:05.062268   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:48:05.072570   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:48:05.072625   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:48:05.085017   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:48:05.102183   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:48:05.102277   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:48:05.123245   49910 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 20:48:05.396442   49910 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:50:03.104372   49910 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 20:50:03.104476   49910 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 20:50:03.106745   49910 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:50:03.106812   49910 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:50:03.106910   49910 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:50:03.107007   49910 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:50:03.107098   49910 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:50:03.107157   49910 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:50:03.108907   49910 out.go:235]   - Generating certificates and keys ...
	I0401 20:50:03.109002   49910 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:50:03.109103   49910 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:50:03.109209   49910 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:50:03.109285   49910 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:50:03.109364   49910 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:50:03.109432   49910 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:50:03.109503   49910 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:50:03.109670   49910 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-881088 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0401 20:50:03.109745   49910 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:50:03.109919   49910 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-881088 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0401 20:50:03.110006   49910 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:50:03.110089   49910 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:50:03.110146   49910 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:50:03.110245   49910 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:50:03.110317   49910 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:50:03.110389   49910 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:50:03.110475   49910 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:50:03.110549   49910 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:50:03.110689   49910 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:50:03.110802   49910 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:50:03.110866   49910 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:50:03.110945   49910 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:50:03.112707   49910 out.go:235]   - Booting up control plane ...
	I0401 20:50:03.112833   49910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:50:03.112943   49910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:50:03.113087   49910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:50:03.113193   49910 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:50:03.113427   49910 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:50:03.113506   49910 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 20:50:03.113622   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:03.113898   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:03.114021   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:03.114323   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:03.114421   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:03.114673   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:03.114773   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:03.115026   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:03.115133   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:03.115364   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:03.115374   49910 kubeadm.go:310] 
	I0401 20:50:03.115423   49910 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 20:50:03.115476   49910 kubeadm.go:310] 		timed out waiting for the condition
	I0401 20:50:03.115486   49910 kubeadm.go:310] 
	I0401 20:50:03.115531   49910 kubeadm.go:310] 	This error is likely caused by:
	I0401 20:50:03.115584   49910 kubeadm.go:310] 		- The kubelet is not running
	I0401 20:50:03.115731   49910 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 20:50:03.115758   49910 kubeadm.go:310] 
	I0401 20:50:03.115894   49910 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 20:50:03.115940   49910 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 20:50:03.115982   49910 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 20:50:03.115991   49910 kubeadm.go:310] 
	I0401 20:50:03.116129   49910 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 20:50:03.116245   49910 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 20:50:03.116255   49910 kubeadm.go:310] 
	I0401 20:50:03.116391   49910 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 20:50:03.116542   49910 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 20:50:03.116671   49910 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 20:50:03.116773   49910 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0401 20:50:03.116916   49910 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-881088 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-881088 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-881088 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-881088 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0401 20:50:03.116995   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 20:50:03.117292   49910 kubeadm.go:310] 
	I0401 20:50:05.491089   49910 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.374062736s)
	I0401 20:50:05.491177   49910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:50:05.507163   49910 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:50:05.518759   49910 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:50:05.518785   49910 kubeadm.go:157] found existing configuration files:
	
	I0401 20:50:05.518839   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:50:05.528994   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:50:05.529079   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:50:05.539835   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:50:05.549648   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:50:05.549747   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:50:05.560557   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:50:05.573262   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:50:05.573344   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:50:05.587801   49910 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:50:05.601573   49910 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:50:05.601665   49910 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:50:05.613385   49910 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 20:50:05.695720   49910 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:50:05.695818   49910 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:50:05.861252   49910 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:50:05.861406   49910 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:50:05.861551   49910 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:50:06.069077   49910 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:50:06.071173   49910 out.go:235]   - Generating certificates and keys ...
	I0401 20:50:06.071284   49910 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:50:06.071371   49910 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:50:06.071509   49910 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 20:50:06.071614   49910 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 20:50:06.071735   49910 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 20:50:06.071826   49910 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 20:50:06.071942   49910 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 20:50:06.072152   49910 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 20:50:06.072648   49910 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 20:50:06.073059   49910 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 20:50:06.073290   49910 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 20:50:06.073372   49910 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:50:06.165951   49910 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:50:06.363856   49910 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:50:06.458537   49910 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:50:06.670328   49910 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:50:06.687536   49910 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:50:06.691692   49910 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:50:06.691769   49910 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:50:06.847571   49910 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:50:06.849280   49910 out.go:235]   - Booting up control plane ...
	I0401 20:50:06.849385   49910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:50:06.855659   49910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:50:06.856784   49910 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:50:06.857684   49910 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:50:06.859839   49910 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:50:46.861892   49910 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 20:50:46.862174   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:46.862435   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:51.862744   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:51.862964   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:51:01.863671   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:51:01.863952   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:51:21.863426   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:51:21.863653   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:52:01.863562   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:52:01.864042   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:52:01.864088   49910 kubeadm.go:310] 
	I0401 20:52:01.864176   49910 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 20:52:01.864266   49910 kubeadm.go:310] 		timed out waiting for the condition
	I0401 20:52:01.864284   49910 kubeadm.go:310] 
	I0401 20:52:01.864357   49910 kubeadm.go:310] 	This error is likely caused by:
	I0401 20:52:01.864423   49910 kubeadm.go:310] 		- The kubelet is not running
	I0401 20:52:01.864542   49910 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 20:52:01.864551   49910 kubeadm.go:310] 
	I0401 20:52:01.864668   49910 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 20:52:01.864710   49910 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 20:52:01.864749   49910 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 20:52:01.864758   49910 kubeadm.go:310] 
	I0401 20:52:01.864882   49910 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 20:52:01.864984   49910 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 20:52:01.864994   49910 kubeadm.go:310] 
	I0401 20:52:01.865134   49910 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 20:52:01.865273   49910 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 20:52:01.865376   49910 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 20:52:01.865471   49910 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 20:52:01.865482   49910 kubeadm.go:310] 
	I0401 20:52:01.866172   49910 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:52:01.866317   49910 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 20:52:01.866405   49910 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 20:52:01.866475   49910 kubeadm.go:394] duration metric: took 3m56.932633017s to StartCluster
	I0401 20:52:01.866510   49910 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 20:52:01.866571   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 20:52:01.914202   49910 cri.go:89] found id: ""
	I0401 20:52:01.914255   49910 logs.go:282] 0 containers: []
	W0401 20:52:01.914266   49910 logs.go:284] No container was found matching "kube-apiserver"
	I0401 20:52:01.914276   49910 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 20:52:01.914338   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 20:52:01.953898   49910 cri.go:89] found id: ""
	I0401 20:52:01.953927   49910 logs.go:282] 0 containers: []
	W0401 20:52:01.953934   49910 logs.go:284] No container was found matching "etcd"
	I0401 20:52:01.953939   49910 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 20:52:01.953986   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 20:52:01.990788   49910 cri.go:89] found id: ""
	I0401 20:52:01.990833   49910 logs.go:282] 0 containers: []
	W0401 20:52:01.990846   49910 logs.go:284] No container was found matching "coredns"
	I0401 20:52:01.990866   49910 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 20:52:01.990939   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 20:52:02.029740   49910 cri.go:89] found id: ""
	I0401 20:52:02.029769   49910 logs.go:282] 0 containers: []
	W0401 20:52:02.029780   49910 logs.go:284] No container was found matching "kube-scheduler"
	I0401 20:52:02.029787   49910 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 20:52:02.029850   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 20:52:02.075498   49910 cri.go:89] found id: ""
	I0401 20:52:02.075529   49910 logs.go:282] 0 containers: []
	W0401 20:52:02.075539   49910 logs.go:284] No container was found matching "kube-proxy"
	I0401 20:52:02.075547   49910 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 20:52:02.075612   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 20:52:02.120073   49910 cri.go:89] found id: ""
	I0401 20:52:02.120097   49910 logs.go:282] 0 containers: []
	W0401 20:52:02.120106   49910 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 20:52:02.120113   49910 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 20:52:02.120187   49910 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 20:52:02.162706   49910 cri.go:89] found id: ""
	I0401 20:52:02.162733   49910 logs.go:282] 0 containers: []
	W0401 20:52:02.162743   49910 logs.go:284] No container was found matching "kindnet"
	I0401 20:52:02.162755   49910 logs.go:123] Gathering logs for container status ...
	I0401 20:52:02.162769   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 20:52:02.204915   49910 logs.go:123] Gathering logs for kubelet ...
	I0401 20:52:02.204945   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 20:52:02.256749   49910 logs.go:123] Gathering logs for dmesg ...
	I0401 20:52:02.256785   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 20:52:02.272054   49910 logs.go:123] Gathering logs for describe nodes ...
	I0401 20:52:02.272085   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 20:52:02.417144   49910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 20:52:02.417170   49910 logs.go:123] Gathering logs for CRI-O ...
	I0401 20:52:02.417185   49910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0401 20:52:02.530690   49910 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 20:52:02.530770   49910 out.go:270] * 
	* 
	W0401 20:52:02.530838   49910 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 20:52:02.530861   49910 out.go:270] * 
	* 
	W0401 20:52:02.531733   49910 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:52:02.534785   49910 out.go:201] 
	W0401 20:52:02.536311   49910 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 20:52:02.536351   49910 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 20:52:02.536369   49910 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 20:52:02.537862   49910 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
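The kubeadm output captured above points to the same triage path on every retry. As a minimal shell sketch (assuming a shell inside the affected VM, e.g. via 'minikube ssh' for this profile, which is not part of the captured log; the individual check commands are the ones quoted by kubeadm above):

	# host: open a shell in the VM for this profile (assumed entry point, not shown in the log)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-881088
	# inside the VM: is the kubelet running, and why did it exit?
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers via CRI-O, then inspect the failing one (replace CONTAINERID)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the kubelet journal points at a cgroup-driver mismatch, the log above also suggests retrying the start with --extra-config=kubelet.cgroup-driver=systemd.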
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-881088
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-881088: (2.301156252s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-881088 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-881088 status --format={{.Host}}: exit status 7 (75.935228ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0401 20:52:10.867647   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:52:27.799484   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.753844102s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-881088 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.290255ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-881088] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-881088
	    minikube start -p kubernetes-upgrade-881088 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8810882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-881088 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
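After the refused downgrade, the profile state can be inspected by hand with the same commands the harness runs in this test; a minimal sketch using the profile and context names from this run (output not reproduced here):

	# host state of the profile, as checked at version_upgrade_test.go:232
	out/minikube-linux-amd64 -p kubernetes-upgrade-881088 status --format={{.Host}}
	# client/server versions, as checked at version_upgrade_test.go:248
	kubectl --context kubernetes-upgrade-881088 version --output=json

Both should still reflect the existing v1.32.2 cluster, since the downgrade attempt exits with K8S_DOWNGRADE_UNSUPPORTED in well under a second without modifying it.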
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-881088 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.013422255s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-01 20:54:06.872479632 +0000 UTC m=+4147.172228320
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-881088 -n kubernetes-upgrade-881088
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-881088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-881088 logs -n 25: (1.645381692s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-269490 sudo                 | cilium-269490             | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-269490 sudo                 | cilium-269490             | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-269490 sudo find            | cilium-269490             | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-269490 sudo crio            | cilium-269490             | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-269490                      | cilium-269490             | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC | 01 Apr 25 20:51 UTC |
	| delete  | -p running-upgrade-877059             | running-upgrade-877059    | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC | 01 Apr 25 20:51 UTC |
	| start   | -p cert-expiration-808084             | cert-expiration-808084    | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC | 01 Apr 25 20:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-321311             | minikube                  | jenkins | v1.26.0 | 01 Apr 25 20:51 UTC | 01 Apr 25 20:52 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-818542           | force-systemd-env-818542  | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC | 01 Apr 25 20:51 UTC |
	| start   | -p force-systemd-flag-846715          | force-systemd-flag-846715 | jenkins | v1.35.0 | 01 Apr 25 20:51 UTC | 01 Apr 25 20:52 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-881088          | kubernetes-upgrade-881088 | jenkins | v1.35.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:52 UTC |
	| start   | -p kubernetes-upgrade-881088          | kubernetes-upgrade-881088 | jenkins | v1.35.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-321311 stop           | minikube                  | jenkins | v1.26.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:52 UTC |
	| start   | -p stopped-upgrade-321311             | stopped-upgrade-321311    | jenkins | v1.35.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-846715 ssh cat     | force-systemd-flag-846715 | jenkins | v1.35.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-846715          | force-systemd-flag-846715 | jenkins | v1.35.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:52 UTC |
	| start   | -p cert-options-454573                | cert-options-454573       | jenkins | v1.35.0 | 01 Apr 25 20:52 UTC | 01 Apr 25 20:54 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-881088          | kubernetes-upgrade-881088 | jenkins | v1.35.0 | 01 Apr 25 20:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-881088          | kubernetes-upgrade-881088 | jenkins | v1.35.0 | 01 Apr 25 20:53 UTC | 01 Apr 25 20:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-321311             | stopped-upgrade-321311    | jenkins | v1.35.0 | 01 Apr 25 20:53 UTC | 01 Apr 25 20:53 UTC |
	| start   | -p old-k8s-version-582207             | old-k8s-version-582207    | jenkins | v1.35.0 | 01 Apr 25 20:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | cert-options-454573 ssh               | cert-options-454573       | jenkins | v1.35.0 | 01 Apr 25 20:54 UTC | 01 Apr 25 20:54 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-454573 -- sudo        | cert-options-454573       | jenkins | v1.35.0 | 01 Apr 25 20:54 UTC | 01 Apr 25 20:54 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-454573                | cert-options-454573       | jenkins | v1.35.0 | 01 Apr 25 20:54 UTC | 01 Apr 25 20:54 UTC |
	| start   | -p no-preload-881142                  | no-preload-881142         | jenkins | v1.35.0 | 01 Apr 25 20:54 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:54:03
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:54:03.878592   57915 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:54:03.878939   57915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:54:03.878959   57915 out.go:358] Setting ErrFile to fd 2...
	I0401 20:54:03.878966   57915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:54:03.879285   57915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:54:03.880069   57915 out.go:352] Setting JSON to false
	I0401 20:54:03.881350   57915 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5788,"bootTime":1743535056,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:54:03.881435   57915 start.go:139] virtualization: kvm guest
	I0401 20:54:03.884432   57915 out.go:177] * [no-preload-881142] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:54:03.886136   57915 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:54:03.886154   57915 notify.go:220] Checking for updates...
	I0401 20:54:03.889275   57915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:54:03.890840   57915 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:54:03.892170   57915 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:54:03.893571   57915 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:54:03.894826   57915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:54:03.896673   57915 config.go:182] Loaded profile config "cert-expiration-808084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:54:03.896820   57915 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:54:03.896955   57915 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:54:03.897080   57915 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:54:03.939960   57915 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 20:54:03.941324   57915 start.go:297] selected driver: kvm2
	I0401 20:54:03.941339   57915 start.go:901] validating driver "kvm2" against <nil>
	I0401 20:54:03.941354   57915 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:54:03.942561   57915 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:54:03.942676   57915 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:54:03.959324   57915 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:54:03.959379   57915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:54:03.959612   57915 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:54:03.959646   57915 cni.go:84] Creating CNI manager for ""
	I0401 20:54:03.959687   57915 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:54:03.959698   57915 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 20:54:03.959743   57915 start.go:340] cluster config:
	{Name:no-preload-881142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-881142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:54:03.959832   57915 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:54:03.961735   57915 out.go:177] * Starting "no-preload-881142" primary control-plane node in "no-preload-881142" cluster
	I0401 20:54:02.621837   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 20:54:02.621868   57076 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 20:54:02.621885   57076 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0401 20:54:02.640200   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 20:54:02.640235   57076 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 20:54:03.036187   57076 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0401 20:54:03.044350   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:54:03.044378   57076 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:54:03.536027   57076 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0401 20:54:03.550894   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:54:03.550938   57076 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:54:04.036088   57076 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0401 20:54:04.055121   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:54:04.055159   57076 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:54:04.536468   57076 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0401 20:54:04.546370   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0401 20:54:04.562227   57076 api_server.go:141] control plane version: v1.32.2
	I0401 20:54:04.562258   57076 api_server.go:131] duration metric: took 5.026295461s to wait for apiserver health ...
	I0401 20:54:04.562270   57076 cni.go:84] Creating CNI manager for ""
	I0401 20:54:04.562278   57076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:54:04.564202   57076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 20:54:04.565400   57076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 20:54:04.612990   57076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0401 20:54:04.704711   57076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:54:04.733308   57076 system_pods.go:59] 8 kube-system pods found
	I0401 20:54:04.733361   57076 system_pods.go:61] "coredns-668d6bf9bc-g2cjk" [0d4b1e78-8165-4c5e-b788-3bd135190ee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 20:54:04.733374   57076 system_pods.go:61] "coredns-668d6bf9bc-gxwm9" [96853fed-fcad-4f9f-abe6-16990348547f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 20:54:04.733385   57076 system_pods.go:61] "etcd-kubernetes-upgrade-881088" [cbeae0d9-4cb3-4b6a-9bfa-e1ab5e7b7d24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 20:54:04.733398   57076 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-881088" [38542bd0-f3eb-4458-a21c-b4bf2036eff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 20:54:04.733408   57076 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-881088" [2c51731f-0384-4401-872f-68aa8f5198f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 20:54:04.733423   57076 system_pods.go:61] "kube-proxy-x9rmw" [f23aab68-46c8-43ff-9079-b575a515ed5f] Running
	I0401 20:54:04.733432   57076 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-881088" [4106f85b-949d-484d-891b-7aeeaf53a4cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 20:54:04.733437   57076 system_pods.go:61] "storage-provisioner" [25488b3e-ec76-44f9-ace3-19cc0ebc2c39] Running
	I0401 20:54:04.733446   57076 system_pods.go:74] duration metric: took 28.710776ms to wait for pod list to return data ...
	I0401 20:54:04.733455   57076 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:54:04.748356   57076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 20:54:04.748387   57076 node_conditions.go:123] node cpu capacity is 2
	I0401 20:54:04.748397   57076 node_conditions.go:105] duration metric: took 14.935403ms to run NodePressure ...
	I0401 20:54:04.748417   57076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 20:54:05.335302   57076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:54:05.349100   57076 ops.go:34] apiserver oom_adj: -16
	I0401 20:54:05.349125   57076 kubeadm.go:597] duration metric: took 8.795491235s to restartPrimaryControlPlane
	I0401 20:54:05.349135   57076 kubeadm.go:394] duration metric: took 8.885874755s to StartCluster
	I0401 20:54:05.349155   57076 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:05.349237   57076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:54:05.350463   57076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:05.350760   57076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:54:05.350839   57076 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:54:05.350937   57076 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-881088"
	I0401 20:54:05.350958   57076 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-881088"
	W0401 20:54:05.350970   57076 addons.go:247] addon storage-provisioner should already be in state true
	I0401 20:54:05.350974   57076 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:54:05.351003   57076 host.go:66] Checking if "kubernetes-upgrade-881088" exists ...
	I0401 20:54:05.351048   57076 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-881088"
	I0401 20:54:05.351068   57076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-881088"
	I0401 20:54:05.351380   57076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:54:05.351418   57076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:54:05.351558   57076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:54:05.351590   57076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:54:05.352667   57076 out.go:177] * Verifying Kubernetes components...
	I0401 20:54:05.354276   57076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:54:05.371232   57076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0401 20:54:05.371834   57076 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:54:05.372321   57076 main.go:141] libmachine: Using API Version  1
	I0401 20:54:05.372347   57076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:54:05.372724   57076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0401 20:54:05.372883   57076 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:54:05.373058   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetState
	I0401 20:54:05.373336   57076 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:54:05.373929   57076 main.go:141] libmachine: Using API Version  1
	I0401 20:54:05.373946   57076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:54:05.374339   57076 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:54:05.374898   57076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:54:05.374948   57076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:54:05.376695   57076 kapi.go:59] client config for kubernetes-upgrade-881088: &rest.Config{Host:"https://192.168.39.185:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.crt", KeyFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kubernetes-upgrade-881088/client.key", CAFile:"/home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 20:54:05.377030   57076 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-881088"
	W0401 20:54:05.377053   57076 addons.go:247] addon default-storageclass should already be in state true
	I0401 20:54:05.377082   57076 host.go:66] Checking if "kubernetes-upgrade-881088" exists ...
	I0401 20:54:05.377439   57076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:54:05.377480   57076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:54:05.393851   57076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0401 20:54:05.394475   57076 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:54:05.394801   57076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I0401 20:54:05.395143   57076 main.go:141] libmachine: Using API Version  1
	I0401 20:54:05.395157   57076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:54:05.395514   57076 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:54:05.396088   57076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:54:05.396127   57076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:54:05.396322   57076 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:54:05.396833   57076 main.go:141] libmachine: Using API Version  1
	I0401 20:54:05.396845   57076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:54:05.397226   57076 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:54:05.397398   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetState
	I0401 20:54:05.399644   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:54:05.401765   57076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:54:05.402970   57076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:54:05.402982   57076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:54:05.402996   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:54:05.406208   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:54:05.406698   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:52:47 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:54:05.406718   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:54:05.406939   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:54:05.407114   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:54:05.407215   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:54:05.407313   57076 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa Username:docker}
	I0401 20:54:05.431225   57076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0401 20:54:05.431641   57076 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:54:05.432158   57076 main.go:141] libmachine: Using API Version  1
	I0401 20:54:05.432179   57076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:54:05.432712   57076 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:54:05.432884   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetState
	I0401 20:54:05.435204   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .DriverName
	I0401 20:54:05.435944   57076 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:54:05.435959   57076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:54:05.435977   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHHostname
	I0401 20:54:05.438986   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:54:05.439490   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:3e:fe", ip: ""} in network mk-kubernetes-upgrade-881088: {Iface:virbr1 ExpiryTime:2025-04-01 21:52:47 +0000 UTC Type:0 Mac:52:54:00:c1:3e:fe Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:kubernetes-upgrade-881088 Clientid:01:52:54:00:c1:3e:fe}
	I0401 20:54:05.439524   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | domain kubernetes-upgrade-881088 has defined IP address 192.168.39.185 and MAC address 52:54:00:c1:3e:fe in network mk-kubernetes-upgrade-881088
	I0401 20:54:05.439690   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHPort
	I0401 20:54:05.439837   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHKeyPath
	I0401 20:54:05.439965   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .GetSSHUsername
	I0401 20:54:05.440048   57076 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kubernetes-upgrade-881088/id_rsa Username:docker}
	I0401 20:54:05.635920   57076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:54:05.669903   57076 api_server.go:52] waiting for apiserver process to appear ...
	I0401 20:54:05.669988   57076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:54:05.704623   57076 api_server.go:72] duration metric: took 353.821687ms to wait for apiserver process to appear ...
	I0401 20:54:05.704655   57076 api_server.go:88] waiting for apiserver healthz status ...
	I0401 20:54:05.704676   57076 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0401 20:54:05.723845   57076 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0401 20:54:05.725735   57076 api_server.go:141] control plane version: v1.32.2
	I0401 20:54:05.725761   57076 api_server.go:131] duration metric: took 21.098578ms to wait for apiserver health ...
	I0401 20:54:05.725772   57076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:54:05.732786   57076 system_pods.go:59] 8 kube-system pods found
	I0401 20:54:05.732824   57076 system_pods.go:61] "coredns-668d6bf9bc-g2cjk" [0d4b1e78-8165-4c5e-b788-3bd135190ee4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 20:54:05.732841   57076 system_pods.go:61] "coredns-668d6bf9bc-gxwm9" [96853fed-fcad-4f9f-abe6-16990348547f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 20:54:05.732856   57076 system_pods.go:61] "etcd-kubernetes-upgrade-881088" [cbeae0d9-4cb3-4b6a-9bfa-e1ab5e7b7d24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 20:54:05.732871   57076 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-881088" [38542bd0-f3eb-4458-a21c-b4bf2036eff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 20:54:05.732881   57076 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-881088" [2c51731f-0384-4401-872f-68aa8f5198f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 20:54:05.732890   57076 system_pods.go:61] "kube-proxy-x9rmw" [f23aab68-46c8-43ff-9079-b575a515ed5f] Running
	I0401 20:54:05.732899   57076 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-881088" [4106f85b-949d-484d-891b-7aeeaf53a4cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 20:54:05.732904   57076 system_pods.go:61] "storage-provisioner" [25488b3e-ec76-44f9-ace3-19cc0ebc2c39] Running
	I0401 20:54:05.732913   57076 system_pods.go:74] duration metric: took 7.13418ms to wait for pod list to return data ...
	I0401 20:54:05.732925   57076 kubeadm.go:582] duration metric: took 382.129228ms to wait for: map[apiserver:true system_pods:true]
	I0401 20:54:05.732939   57076 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:54:05.746985   57076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 20:54:05.747011   57076 node_conditions.go:123] node cpu capacity is 2
	I0401 20:54:05.747023   57076 node_conditions.go:105] duration metric: took 14.078703ms to run NodePressure ...
	I0401 20:54:05.747037   57076 start.go:241] waiting for startup goroutines ...
	I0401 20:54:05.828839   57076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:54:05.856895   57076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:54:06.798423   57076 main.go:141] libmachine: Making call to close driver server
	I0401 20:54:06.798452   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .Close
	I0401 20:54:06.798499   57076 main.go:141] libmachine: Making call to close driver server
	I0401 20:54:06.798540   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .Close
	I0401 20:54:06.798810   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Closing plugin on server side
	I0401 20:54:06.798838   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Closing plugin on server side
	I0401 20:54:06.798870   57076 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:54:06.798883   57076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:54:06.798868   57076 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:54:06.798919   57076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:54:06.798932   57076 main.go:141] libmachine: Making call to close driver server
	I0401 20:54:06.798940   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .Close
	I0401 20:54:06.798897   57076 main.go:141] libmachine: Making call to close driver server
	I0401 20:54:06.798991   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .Close
	I0401 20:54:06.799316   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Closing plugin on server side
	I0401 20:54:06.799352   57076 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:54:06.799350   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Closing plugin on server side
	I0401 20:54:06.799358   57076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:54:06.799369   57076 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:54:06.799376   57076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:54:06.805310   57076 main.go:141] libmachine: Making call to close driver server
	I0401 20:54:06.805332   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) Calling .Close
	I0401 20:54:06.805671   57076 main.go:141] libmachine: (kubernetes-upgrade-881088) DBG | Closing plugin on server side
	I0401 20:54:06.805738   57076 main.go:141] libmachine: Successfully made call to close driver server
	I0401 20:54:06.805781   57076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 20:54:06.808273   57076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:54:06.809306   57076 addons.go:514] duration metric: took 1.458478469s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:54:06.809346   57076 start.go:246] waiting for cluster config update ...
	I0401 20:54:06.809356   57076 start.go:255] writing updated cluster config ...
	I0401 20:54:06.809581   57076 ssh_runner.go:195] Run: rm -f paused
	I0401 20:54:06.855828   57076 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 20:54:06.857651   57076 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-881088" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.619772350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80d6cf17-38e5-4d30-8fde-371d96f8d33d name=/runtime.v1.RuntimeService/Version
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.622518209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=477ec790-95a5-4e93-9781-9fd5c09d555b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.623063394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540847623031207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=477ec790-95a5-4e93-9781-9fd5c09d555b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.623770969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1ee6900-c214-492a-bd4c-7fba40cf107f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.623846712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1ee6900-c214-492a-bd4c-7fba40cf107f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.624164927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a551bc47a6ca1cd786cc36350f71b5325c6a3c5d7f5ef1a27413c2d56c786f7,PodSandboxId:9a1b2f5fdd5af43b9d38a28fdbf23388272e33f7e2f9f5befc0548ef2d38fd43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844980951992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6610cf254a21e549afff8a9618f8e4bced4bd2b234551760a45c7cc6a21eac,PodSandboxId:ff96186917912e32f70f9926717e83d0e10bfb655afd2696f39f078b4d8093ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844891604542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64babb354f5985db1bd0d45fdbe1006f54e692d53568fd3680231b8d9ede29ef,PodSandboxId:25c8b1d7ef76d0727efa55608ead461cbd6ec7f0f1b3718439af5a27a4d0eb43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1743540844234302164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7477a0ffb744065ef7a50cfeac5a9114c8c4099360c98702671564c89e7c3d68,PodSandboxId:1e96463633daf04e14c8284e1b45149e57725a48c61a90df516c21b523ca2d12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,C
reatedAt:1743540844153523979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32156bc6e4292307e44c3d7b1a8af9c594ff42cb249a8d981464c45a34316cf9,PodSandboxId:b1dc8f242841c090808fa3810fe7353167a688f9673e2838f88af3a6e0388f27,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540839183515796,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db7b531060963d97889adad33f6914853526edeeb5e6e44ea25c2aa6ad9ef32,PodSandboxId:b48c09e5c4927949333629df1e1bf8fa4ba8e0ee61f59050e0f3c015bd2e50e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:17435408391823509
29,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe9dca575efbe13de4f132ea075dc5085700ff0fdb61cda7f60110f8f60d94f,PodSandboxId:e531a13f37dbd550f4e6963f5f5172d927a0fbcb6fe84e95744a39db109af898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743
540839174885465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3a5b98c722d83a3bdd01892ac3875642566835bfd8d198817584b871697e37,PodSandboxId:c4856e4990d920f791b0f6b0e732981c99fd38272c0caecb1785fa6a4a7238ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540839159523357
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bf471ae051472ea4cb18ffa2c909b430d6a37e0c80ddeab71091b0c61cdeb6,PodSandboxId:4910262096b531b09255d5ff345ad633952ef1e56d51def78dc387ed3e4d4332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796374116413,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65c264668fab2c4ebeea0f9706c616b7bc953ffdc13e7a4ec1c83adb3d26967,PodSandboxId:0c595c08681abf8bfc7deda6df45e8b04d070c0b487511832fb2aab92ff5ceb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796324600820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cb8734ab1b431e0da1b1009243932ebf8cf9f96416cef05b2caba93466c63d,PodSandboxId:734efa381feb145c84deb709c342a584a93d2b3fb77513e
71469f11573a24608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540795728355676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3209b3fc86d7b9ed29edbf3ab37430c570b34677e1d44809ece7a6c193de9d55,PodSandboxId:428dd53425aa753b4eeb97730249d3b64cf63829104109dce74fd5b250cb13c4,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743540795670020298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22520a0600eb99b385216db5a42723ef7edb83242d52c9962964b2e2acc5e630,PodSandboxId:b71fda6154e6db79badad95109e56f98026711733f2365aa96bae7cba53f0749,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540785508102512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead3572ff8efba912c679737c17ab5dc3e118116c7ab7ce1da4072731fe65cf,PodSandboxId:096a2367540b013904fbc7a204a3f9726f8cdb5f342a052ca340c42dafeea59d,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540785513733647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c8bc30eab3bd672a27e572014ef5a4c2c205a457f782753bf8effc5e4ea2e3,PodSandboxId:32ca8ac9589d9d807d782f83efc44acb3b27391471b8ade16cbaacead3b70027,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540785447330057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4ae4e0c5691b287c9ad962471296a0596d64b67f26a7abd306524b4e57a6b5,PodSandboxId:e0f3fed915bdd4874233038048aa859e02153925fe986676af2279283227c7fa,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540785414283687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1ee6900-c214-492a-bd4c-7fba40cf107f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.670889920Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11a01b51-7944-4c63-81bd-83c5a1bd41e8 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.670961429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11a01b51-7944-4c63-81bd-83c5a1bd41e8 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.673115935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60540709-ffef-403d-b142-9d08f6dd2932 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.673717262Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=aaaf9596-561a-4b09-8342-19cc6a9127ec name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.674041117Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ff96186917912e32f70f9926717e83d0e10bfb655afd2696f39f078b4d8093ea,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-g2cjk,Uid:0d4b1e78-8165-4c5e-b788-3bd135190ee4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540844067368976,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-01T20:54:03.444890822Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a1b2f5fdd5af43b9d38a28fdbf23388272e33f7e2f9f5befc0548ef2d38fd43,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-gxwm9,Uid:96853fed-fcad-4f9f-abe6-16990348547f,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540844065536210,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-01T20:54:03.444893110Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:25c8b1d7ef76d0727efa55608ead461cbd6ec7f0f1b3718439af5a27a4d0eb43,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:25488b3e-ec76-44f9-ace3-19cc0ebc2c39,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540843816159525,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-01T20:54:03.444886991Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e96463633daf04e14c8284e1b45149e57725a48c61a90df516c21b523ca2d12,Metadata:&PodSandboxMetadata{Name:kube-proxy-x9rmw,Uid:f23aab68-46c8-43ff-9079-b575a515ed5f,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540843812884336,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-01T20:54:03.444901817Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b48c09e5c4927949333629df1e1bf8fa4ba8e0ee61f59050e0f3c015bd2e50e8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-881088,Uid:7cd30459878bb2568b459adb0f0da4e4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540838939498234,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 7cd30459878bb2568b459adb0f0da4e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7cd30459878bb2568b459adb0f0da4e4,kubernetes.io/config.seen: 2025-04-01T20:53:58.430714206Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b1dc8f242841c090808fa3810fe7353167a688f9673e2838f88af3a6e0388f27,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-881088,Uid:86c7c8a46bb95981fd46cc2df440ce1d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540838937499225,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 86c7c8a46bb95981fd46cc2df440ce1d,kubernetes.io/config.seen: 2025-04-01T20:53:58.430706669Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e
531a13f37dbd550f4e6963f5f5172d927a0fbcb6fe84e95744a39db109af898,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-881088,Uid:3ca4e5e1c769a1263c57d047c2faa1e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1743540838934704589,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 3ca4e5e1c769a1263c57d047c2faa1e9,kubernetes.io/config.seen: 2025-04-01T20:53:58.430712660Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4856e4990d920f791b0f6b0e732981c99fd38272c0caecb1785fa6a4a7238ec,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-881088,Uid:02b8ef573c808d247a73d06c6235286b,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1743540838930924383,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kubernetes.io/config.hash: 02b8ef573c808d247a73d06c6235286b,kubernetes.io/config.seen: 2025-04-01T20:53:58.430711372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c595c08681abf8bfc7deda6df45e8b04d070c0b487511832fb2aab92ff5ceb6,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-gxwm9,Uid:96853fed-fcad-4f9f-abe6-16990348547f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540795775711599,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 96853fed-fcad-4f9f-abe6-16990348547f,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-01T20:53:15.466438016Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4910262096b531b09255d5ff345ad633952ef1e56d51def78dc387ed3e4d4332,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-g2cjk,Uid:0d4b1e78-8165-4c5e-b788-3bd135190ee4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540795758903460,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-01T20:53:15.450426852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:428dd53425aa753b4eeb97730249d3b64cf63829104109dce74fd5b250cb13c4,Metadata:&PodSandboxMetadata{Name:storage-provi
sioner,Uid:25488b3e-ec76-44f9-ace3-19cc0ebc2c39,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540795494145027,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,
\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-04-01T20:53:14.580773884Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:734efa381feb145c84deb709c342a584a93d2b3fb77513e71469f11573a24608,Metadata:&PodSandboxMetadata{Name:kube-proxy-x9rmw,Uid:f23aab68-46c8-43ff-9079-b575a515ed5f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540795478823036,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-04-01T20:53:15.162652206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b71fda6154e6db79badad95109e56f98026711733f2365aa96bae7cba53f0749,Metada
ta:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-881088,Uid:3ca4e5e1c769a1263c57d047c2faa1e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540785252699047,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 3ca4e5e1c769a1263c57d047c2faa1e9,kubernetes.io/config.seen: 2025-04-01T20:53:04.141199961Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:096a2367540b013904fbc7a204a3f9726f8cdb5f342a052ca340c42dafeea59d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-881088,Uid:86c7c8a46bb95981fd46cc2df440ce1d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540785248272
031,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 86c7c8a46bb95981fd46cc2df440ce1d,kubernetes.io/config.seen: 2025-04-01T20:53:04.141202836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32ca8ac9589d9d807d782f83efc44acb3b27391471b8ade16cbaacead3b70027,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-881088,Uid:7cd30459878bb2568b459adb0f0da4e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540785230594869,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da
4e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7cd30459878bb2568b459adb0f0da4e4,kubernetes.io/config.seen: 2025-04-01T20:53:04.141201430Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e0f3fed915bdd4874233038048aa859e02153925fe986676af2279283227c7fa,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-881088,Uid:02b8ef573c808d247a73d06c6235286b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1743540785227464936,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kubernetes.io/config.hash: 02b8ef573c808d247a73d06c6235286b,kubernetes.io/config.seen: 2025-04-01T20:53:04.141188860Z,kubernetes.io/config.source: file,},RuntimeHandler:,
},},}" file="otel-collector/interceptors.go:74" id=aaaf9596-561a-4b09-8342-19cc6a9127ec name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.674722285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540847674700585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60540709-ffef-403d-b142-9d08f6dd2932 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.675131713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35a19345-2053-4ea5-88c0-81cf51e66fd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.675212776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35a19345-2053-4ea5-88c0-81cf51e66fd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.675787862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a551bc47a6ca1cd786cc36350f71b5325c6a3c5d7f5ef1a27413c2d56c786f7,PodSandboxId:9a1b2f5fdd5af43b9d38a28fdbf23388272e33f7e2f9f5befc0548ef2d38fd43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844980951992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6610cf254a21e549afff8a9618f8e4bced4bd2b234551760a45c7cc6a21eac,PodSandboxId:ff96186917912e32f70f9926717e83d0e10bfb655afd2696f39f078b4d8093ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844891604542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64babb354f5985db1bd0d45fdbe1006f54e692d53568fd3680231b8d9ede29ef,PodSandboxId:25c8b1d7ef76d0727efa55608ead461cbd6ec7f0f1b3718439af5a27a4d0eb43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1743540844234302164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7477a0ffb744065ef7a50cfeac5a9114c8c4099360c98702671564c89e7c3d68,PodSandboxId:1e96463633daf04e14c8284e1b45149e57725a48c61a90df516c21b523ca2d12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,C
reatedAt:1743540844153523979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32156bc6e4292307e44c3d7b1a8af9c594ff42cb249a8d981464c45a34316cf9,PodSandboxId:b1dc8f242841c090808fa3810fe7353167a688f9673e2838f88af3a6e0388f27,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540839183515796,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db7b531060963d97889adad33f6914853526edeeb5e6e44ea25c2aa6ad9ef32,PodSandboxId:b48c09e5c4927949333629df1e1bf8fa4ba8e0ee61f59050e0f3c015bd2e50e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:17435408391823509
29,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe9dca575efbe13de4f132ea075dc5085700ff0fdb61cda7f60110f8f60d94f,PodSandboxId:e531a13f37dbd550f4e6963f5f5172d927a0fbcb6fe84e95744a39db109af898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743
540839174885465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3a5b98c722d83a3bdd01892ac3875642566835bfd8d198817584b871697e37,PodSandboxId:c4856e4990d920f791b0f6b0e732981c99fd38272c0caecb1785fa6a4a7238ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540839159523357
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bf471ae051472ea4cb18ffa2c909b430d6a37e0c80ddeab71091b0c61cdeb6,PodSandboxId:4910262096b531b09255d5ff345ad633952ef1e56d51def78dc387ed3e4d4332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796374116413,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65c264668fab2c4ebeea0f9706c616b7bc953ffdc13e7a4ec1c83adb3d26967,PodSandboxId:0c595c08681abf8bfc7deda6df45e8b04d070c0b487511832fb2aab92ff5ceb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796324600820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cb8734ab1b431e0da1b1009243932ebf8cf9f96416cef05b2caba93466c63d,PodSandboxId:734efa381feb145c84deb709c342a584a93d2b3fb77513e
71469f11573a24608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540795728355676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3209b3fc86d7b9ed29edbf3ab37430c570b34677e1d44809ece7a6c193de9d55,PodSandboxId:428dd53425aa753b4eeb97730249d3b64cf63829104109dce74fd5b250cb13c4,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743540795670020298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22520a0600eb99b385216db5a42723ef7edb83242d52c9962964b2e2acc5e630,PodSandboxId:b71fda6154e6db79badad95109e56f98026711733f2365aa96bae7cba53f0749,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540785508102512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead3572ff8efba912c679737c17ab5dc3e118116c7ab7ce1da4072731fe65cf,PodSandboxId:096a2367540b013904fbc7a204a3f9726f8cdb5f342a052ca340c42dafeea59d,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540785513733647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c8bc30eab3bd672a27e572014ef5a4c2c205a457f782753bf8effc5e4ea2e3,PodSandboxId:32ca8ac9589d9d807d782f83efc44acb3b27391471b8ade16cbaacead3b70027,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540785447330057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4ae4e0c5691b287c9ad962471296a0596d64b67f26a7abd306524b4e57a6b5,PodSandboxId:e0f3fed915bdd4874233038048aa859e02153925fe986676af2279283227c7fa,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540785414283687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35a19345-2053-4ea5-88c0-81cf51e66fd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.676043215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e3e046e-d56b-447f-8985-fb4f975445fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.676518032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e3e046e-d56b-447f-8985-fb4f975445fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.677047110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a551bc47a6ca1cd786cc36350f71b5325c6a3c5d7f5ef1a27413c2d56c786f7,PodSandboxId:9a1b2f5fdd5af43b9d38a28fdbf23388272e33f7e2f9f5befc0548ef2d38fd43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844980951992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6610cf254a21e549afff8a9618f8e4bced4bd2b234551760a45c7cc6a21eac,PodSandboxId:ff96186917912e32f70f9926717e83d0e10bfb655afd2696f39f078b4d8093ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844891604542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64babb354f5985db1bd0d45fdbe1006f54e692d53568fd3680231b8d9ede29ef,PodSandboxId:25c8b1d7ef76d0727efa55608ead461cbd6ec7f0f1b3718439af5a27a4d0eb43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1743540844234302164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7477a0ffb744065ef7a50cfeac5a9114c8c4099360c98702671564c89e7c3d68,PodSandboxId:1e96463633daf04e14c8284e1b45149e57725a48c61a90df516c21b523ca2d12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,C
reatedAt:1743540844153523979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32156bc6e4292307e44c3d7b1a8af9c594ff42cb249a8d981464c45a34316cf9,PodSandboxId:b1dc8f242841c090808fa3810fe7353167a688f9673e2838f88af3a6e0388f27,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540839183515796,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db7b531060963d97889adad33f6914853526edeeb5e6e44ea25c2aa6ad9ef32,PodSandboxId:b48c09e5c4927949333629df1e1bf8fa4ba8e0ee61f59050e0f3c015bd2e50e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:17435408391823509
29,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe9dca575efbe13de4f132ea075dc5085700ff0fdb61cda7f60110f8f60d94f,PodSandboxId:e531a13f37dbd550f4e6963f5f5172d927a0fbcb6fe84e95744a39db109af898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743
540839174885465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3a5b98c722d83a3bdd01892ac3875642566835bfd8d198817584b871697e37,PodSandboxId:c4856e4990d920f791b0f6b0e732981c99fd38272c0caecb1785fa6a4a7238ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540839159523357
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bf471ae051472ea4cb18ffa2c909b430d6a37e0c80ddeab71091b0c61cdeb6,PodSandboxId:4910262096b531b09255d5ff345ad633952ef1e56d51def78dc387ed3e4d4332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796374116413,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65c264668fab2c4ebeea0f9706c616b7bc953ffdc13e7a4ec1c83adb3d26967,PodSandboxId:0c595c08681abf8bfc7deda6df45e8b04d070c0b487511832fb2aab92ff5ceb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796324600820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cb8734ab1b431e0da1b1009243932ebf8cf9f96416cef05b2caba93466c63d,PodSandboxId:734efa381feb145c84deb709c342a584a93d2b3fb77513e
71469f11573a24608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540795728355676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3209b3fc86d7b9ed29edbf3ab37430c570b34677e1d44809ece7a6c193de9d55,PodSandboxId:428dd53425aa753b4eeb97730249d3b64cf63829104109dce74fd5b250cb13c4,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743540795670020298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22520a0600eb99b385216db5a42723ef7edb83242d52c9962964b2e2acc5e630,PodSandboxId:b71fda6154e6db79badad95109e56f98026711733f2365aa96bae7cba53f0749,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540785508102512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead3572ff8efba912c679737c17ab5dc3e118116c7ab7ce1da4072731fe65cf,PodSandboxId:096a2367540b013904fbc7a204a3f9726f8cdb5f342a052ca340c42dafeea59d,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540785513733647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c8bc30eab3bd672a27e572014ef5a4c2c205a457f782753bf8effc5e4ea2e3,PodSandboxId:32ca8ac9589d9d807d782f83efc44acb3b27391471b8ade16cbaacead3b70027,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540785447330057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4ae4e0c5691b287c9ad962471296a0596d64b67f26a7abd306524b4e57a6b5,PodSandboxId:e0f3fed915bdd4874233038048aa859e02153925fe986676af2279283227c7fa,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540785414283687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e3e046e-d56b-447f-8985-fb4f975445fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.719345110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2735c48a-0f98-4a25-a50c-be41c79d7075 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.719431842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2735c48a-0f98-4a25-a50c-be41c79d7075 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.720457147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=569d16e4-2427-4cb6-9e7e-81e65ce37cd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.720805428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540847720784363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=569d16e4-2427-4cb6-9e7e-81e65ce37cd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.721955782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0295927e-1d54-46e7-9ac1-29a00e5c6af2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.722006635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0295927e-1d54-46e7-9ac1-29a00e5c6af2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:54:07 kubernetes-upgrade-881088 crio[2318]: time="2025-04-01 20:54:07.722428419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a551bc47a6ca1cd786cc36350f71b5325c6a3c5d7f5ef1a27413c2d56c786f7,PodSandboxId:9a1b2f5fdd5af43b9d38a28fdbf23388272e33f7e2f9f5befc0548ef2d38fd43,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844980951992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c6610cf254a21e549afff8a9618f8e4bced4bd2b234551760a45c7cc6a21eac,PodSandboxId:ff96186917912e32f70f9926717e83d0e10bfb655afd2696f39f078b4d8093ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540844891604542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64babb354f5985db1bd0d45fdbe1006f54e692d53568fd3680231b8d9ede29ef,PodSandboxId:25c8b1d7ef76d0727efa55608ead461cbd6ec7f0f1b3718439af5a27a4d0eb43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1743540844234302164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7477a0ffb744065ef7a50cfeac5a9114c8c4099360c98702671564c89e7c3d68,PodSandboxId:1e96463633daf04e14c8284e1b45149e57725a48c61a90df516c21b523ca2d12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,C
reatedAt:1743540844153523979,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32156bc6e4292307e44c3d7b1a8af9c594ff42cb249a8d981464c45a34316cf9,PodSandboxId:b1dc8f242841c090808fa3810fe7353167a688f9673e2838f88af3a6e0388f27,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540839183515796,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db7b531060963d97889adad33f6914853526edeeb5e6e44ea25c2aa6ad9ef32,PodSandboxId:b48c09e5c4927949333629df1e1bf8fa4ba8e0ee61f59050e0f3c015bd2e50e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:17435408391823509
29,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe9dca575efbe13de4f132ea075dc5085700ff0fdb61cda7f60110f8f60d94f,PodSandboxId:e531a13f37dbd550f4e6963f5f5172d927a0fbcb6fe84e95744a39db109af898,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743
540839174885465,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad3a5b98c722d83a3bdd01892ac3875642566835bfd8d198817584b871697e37,PodSandboxId:c4856e4990d920f791b0f6b0e732981c99fd38272c0caecb1785fa6a4a7238ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540839159523357
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bf471ae051472ea4cb18ffa2c909b430d6a37e0c80ddeab71091b0c61cdeb6,PodSandboxId:4910262096b531b09255d5ff345ad633952ef1e56d51def78dc387ed3e4d4332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796374116413,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g2cjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4b1e78-8165-4c5e-b788-3bd135190ee4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b65c264668fab2c4ebeea0f9706c616b7bc953ffdc13e7a4ec1c83adb3d26967,PodSandboxId:0c595c08681abf8bfc7deda6df45e8b04d070c0b487511832fb2aab92ff5ceb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540796324600820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gxwm9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96853fed-fcad-4f9f-abe6-16990348547f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55cb8734ab1b431e0da1b1009243932ebf8cf9f96416cef05b2caba93466c63d,PodSandboxId:734efa381feb145c84deb709c342a584a93d2b3fb77513e
71469f11573a24608,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540795728355676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9rmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23aab68-46c8-43ff-9079-b575a515ed5f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3209b3fc86d7b9ed29edbf3ab37430c570b34677e1d44809ece7a6c193de9d55,PodSandboxId:428dd53425aa753b4eeb97730249d3b64cf63829104109dce74fd5b250cb13c4,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1743540795670020298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25488b3e-ec76-44f9-ace3-19cc0ebc2c39,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22520a0600eb99b385216db5a42723ef7edb83242d52c9962964b2e2acc5e630,PodSandboxId:b71fda6154e6db79badad95109e56f98026711733f2365aa96bae7cba53f0749,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540785508102512,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca4e5e1c769a1263c57d047c2faa1e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead3572ff8efba912c679737c17ab5dc3e118116c7ab7ce1da4072731fe65cf,PodSandboxId:096a2367540b013904fbc7a204a3f9726f8cdb5f342a052ca340c42dafeea59d,Metadata:&ContainerMetadata{Name:kub
e-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540785513733647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c7c8a46bb95981fd46cc2df440ce1d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c8bc30eab3bd672a27e572014ef5a4c2c205a457f782753bf8effc5e4ea2e3,PodSandboxId:32ca8ac9589d9d807d782f83efc44acb3b27391471b8ade16cbaacead3b70027,Metadata:&ContainerMetadata{Name:kube-cont
roller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540785447330057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd30459878bb2568b459adb0f0da4e4,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4ae4e0c5691b287c9ad962471296a0596d64b67f26a7abd306524b4e57a6b5,PodSandboxId:e0f3fed915bdd4874233038048aa859e02153925fe986676af2279283227c7fa,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540785414283687,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-881088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8ef573c808d247a73d06c6235286b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0295927e-1d54-46e7-9ac1-29a00e5c6af2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1a551bc47a6ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago        Running             coredns                   1                   9a1b2f5fdd5af       coredns-668d6bf9bc-gxwm9
	2c6610cf254a2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago        Running             coredns                   1                   ff96186917912       coredns-668d6bf9bc-g2cjk
	64babb354f598       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       1                   25c8b1d7ef76d       storage-provisioner
	7477a0ffb7440       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   3 seconds ago        Running             kube-proxy                1                   1e96463633daf       kube-proxy-x9rmw
	32156bc6e4292       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   8 seconds ago        Running             kube-scheduler            1                   b1dc8f242841c       kube-scheduler-kubernetes-upgrade-881088
	0db7b53106096       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   8 seconds ago        Running             kube-controller-manager   1                   b48c09e5c4927       kube-controller-manager-kubernetes-upgrade-881088
	bbe9dca575efb       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   8 seconds ago        Running             kube-apiserver            1                   e531a13f37dbd       kube-apiserver-kubernetes-upgrade-881088
	ad3a5b98c722d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   8 seconds ago        Running             etcd                      1                   c4856e4990d92       etcd-kubernetes-upgrade-881088
	14bf471ae0514       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   51 seconds ago       Exited              coredns                   0                   4910262096b53       coredns-668d6bf9bc-g2cjk
	b65c264668fab       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   51 seconds ago       Exited              coredns                   0                   0c595c08681ab       coredns-668d6bf9bc-gxwm9
	55cb8734ab1b4       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   52 seconds ago       Exited              kube-proxy                0                   734efa381feb1       kube-proxy-x9rmw
	3209b3fc86d7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago       Exited              storage-provisioner       0                   428dd53425aa7       storage-provisioner
	3ead3572ff8ef       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   About a minute ago   Exited              kube-scheduler            0                   096a2367540b0       kube-scheduler-kubernetes-upgrade-881088
	22520a0600eb9       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   About a minute ago   Exited              kube-apiserver            0                   b71fda6154e6d       kube-apiserver-kubernetes-upgrade-881088
	06c8bc30eab3b       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   About a minute ago   Exited              kube-controller-manager   0                   32ca8ac9589d9       kube-controller-manager-kubernetes-upgrade-881088
	2f4ae4e0c5691       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   About a minute ago   Exited              etcd                      0                   e0f3fed915bdd       etcd-kubernetes-upgrade-881088
	
	
	==> coredns [14bf471ae051472ea4cb18ffa2c909b430d6a37e0c80ddeab71091b0c61cdeb6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1039708423]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (01-Apr-2025 20:53:16.695) (total time: 26963ms):
	Trace[1039708423]: [26.963616036s] [26.963616036s] END
	[INFO] plugin/kubernetes: Trace[953648151]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (01-Apr-2025 20:53:16.692) (total time: 26966ms):
	Trace[953648151]: [26.9669329s] [26.9669329s] END
	[INFO] plugin/kubernetes: Trace[1304848742]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (01-Apr-2025 20:53:16.691) (total time: 26969ms):
	Trace[1304848742]: [26.969855449s] [26.969855449s] END
	
	
	==> coredns [1a551bc47a6ca1cd786cc36350f71b5325c6a3c5d7f5ef1a27413c2d56c786f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [2c6610cf254a21e549afff8a9618f8e4bced4bd2b234551760a45c7cc6a21eac] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b65c264668fab2c4ebeea0f9706c616b7bc953ffdc13e7a4ec1c83adb3d26967] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[946878438]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (01-Apr-2025 20:53:16.695) (total time: 26957ms):
	Trace[946878438]: [26.957110295s] [26.957110295s] END
	[INFO] plugin/kubernetes: Trace[795964133]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (01-Apr-2025 20:53:16.692) (total time: 26961ms):
	Trace[795964133]: [26.961003434s] [26.961003434s] END
	[INFO] plugin/kubernetes: Trace[1036452875]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (01-Apr-2025 20:53:16.691) (total time: 26961ms):
	Trace[1036452875]: [26.961645014s] [26.961645014s] END
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-881088
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-881088
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:53:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-881088
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:54:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:54:02 +0000   Tue, 01 Apr 2025 20:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:54:02 +0000   Tue, 01 Apr 2025 20:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:54:02 +0000   Tue, 01 Apr 2025 20:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Apr 2025 20:54:02 +0000   Tue, 01 Apr 2025 20:53:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    kubernetes-upgrade-881088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be6ab45a4a984097a47c0a112ed83cc8
	  System UUID:                be6ab45a-4a98-4097-a47c-0a112ed83cc8
	  Boot ID:                    20da3149-fae7-44f1-82f1-f1c2228b11f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-g2cjk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     53s
	  kube-system                 coredns-668d6bf9bc-gxwm9                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     53s
	  kube-system                 etcd-kubernetes-upgrade-881088                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         54s
	  kube-system                 kube-apiserver-kubernetes-upgrade-881088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-881088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-x9rmw                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-kubernetes-upgrade-881088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-881088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-881088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)  kubelet          Node kubernetes-upgrade-881088 status is now: NodeHasSufficientPID
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           54s                node-controller  Node kubernetes-upgrade-881088 event: Registered Node kubernetes-upgrade-881088 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-881088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-881088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-881088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node kubernetes-upgrade-881088 event: Registered Node kubernetes-upgrade-881088 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.520155] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.061066] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082884] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.217895] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.130162] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.347435] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[Apr 1 20:53] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[  +0.061301] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.740949] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +7.064469] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.089903] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.145733] kauditd_printk_skb: 58 callbacks suppressed
	[ +34.162178] systemd-fstab-generator[2243]: Ignoring "noauto" option for root device
	[  +0.084135] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.075283] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.212252] systemd-fstab-generator[2269]: Ignoring "noauto" option for root device
	[  +0.177712] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +0.332907] systemd-fstab-generator[2309]: Ignoring "noauto" option for root device
	[  +4.545314] systemd-fstab-generator[2464]: Ignoring "noauto" option for root device
	[  +0.090595] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.284798] systemd-fstab-generator[2589]: Ignoring "noauto" option for root device
	[Apr 1 20:54] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.651865] systemd-fstab-generator[3614]: Ignoring "noauto" option for root device
	
	
	==> etcd [2f4ae4e0c5691b287c9ad962471296a0596d64b67f26a7abd306524b4e57a6b5] <==
	{"level":"info","ts":"2025-04-01T20:53:06.282480Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:53:06.283289Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:53:06.283808Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:53:06.287788Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:53:06.297276Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:53:06.297312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:53:24.765412Z","caller":"traceutil/trace.go:171","msg":"trace[404948498] linearizableReadLoop","detail":"{readStateIndex:393; appliedIndex:392; }","duration":"174.342591ms","start":"2025-04-01T20:53:24.591046Z","end":"2025-04-01T20:53:24.765389Z","steps":["trace[404948498] 'read index received'  (duration: 174.120176ms)","trace[404948498] 'applied index is now lower than readState.Index'  (duration: 222.003µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-01T20:53:24.765540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.477995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T20:53:24.765604Z","caller":"traceutil/trace.go:171","msg":"trace[768709615] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:385; }","duration":"174.600937ms","start":"2025-04-01T20:53:24.590996Z","end":"2025-04-01T20:53:24.765597Z","steps":["trace[768709615] 'agreement among raft nodes before linearized reading'  (duration: 174.470513ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:53:24.765791Z","caller":"traceutil/trace.go:171","msg":"trace[104498358] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"320.040983ms","start":"2025-04-01T20:53:24.445737Z","end":"2025-04-01T20:53:24.765778Z","steps":["trace[104498358] 'process raft request'  (duration: 319.477153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:53:24.766181Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T20:53:24.445722Z","time spent":"320.083406ms","remote":"127.0.0.1:39530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5886,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" mod_revision:304 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" value_size:5821 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" > >"}
	{"level":"warn","ts":"2025-04-01T20:53:25.386606Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.987815ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814271096364625025 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" mod_revision:385 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" value_size:5649 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-04-01T20:53:25.386838Z","caller":"traceutil/trace.go:171","msg":"trace[331058294] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"609.685148ms","start":"2025-04-01T20:53:24.777135Z","end":"2025-04-01T20:53:25.386820Z","steps":["trace[331058294] 'process raft request'  (duration: 317.306261ms)","trace[331058294] 'compare'  (duration: 291.840095ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-01T20:53:25.386968Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-01T20:53:24.777120Z","time spent":"609.778482ms","remote":"127.0.0.1:39530","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5714,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" mod_revision:385 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" value_size:5649 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-881088\" > >"}
	{"level":"warn","ts":"2025-04-01T20:53:25.530607Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.905048ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814271096364625026 > lease_revoke:<id:192d95f321126ae0>","response":"size:29"}
	{"level":"info","ts":"2025-04-01T20:53:43.653671Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-01T20:53:43.653754Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"kubernetes-upgrade-881088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2025-04-01T20:53:43.653869Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:53:43.653991Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:53:43.738538Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:53:43.738600Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-01T20:53:43.738663Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2025-04-01T20:53:43.741317Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2025-04-01T20:53:43.741447Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2025-04-01T20:53:43.741481Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"kubernetes-upgrade-881088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
	
	==> etcd [ad3a5b98c722d83a3bdd01892ac3875642566835bfd8d198817584b871697e37] <==
	{"level":"info","ts":"2025-04-01T20:53:59.876751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2025-04-01T20:53:59.876991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2025-04-01T20:53:59.877424Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:53:59.877632Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:53:59.895740Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:53:59.896083Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2025-04-01T20:53:59.905147Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2025-04-01T20:53:59.896511Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:53:59.896534Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:54:00.977450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:54:00.977526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:54:00.977556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2025-04-01T20:54:00.977580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:54:00.977599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2025-04-01T20:54:00.977631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:54:00.977641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2025-04-01T20:54:01.017459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:54:01.017909Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:54:01.017992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:54:01.017449Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:kubernetes-upgrade-881088 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:54:01.017491Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:54:01.019020Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:54:01.020031Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2025-04-01T20:54:01.021685Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:54:01.022415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:54:08 up 1 min,  0 users,  load average: 1.05, 0.29, 0.10
	Linux kubernetes-upgrade-881088 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [22520a0600eb99b385216db5a42723ef7edb83242d52c9962964b2e2acc5e630] <==
	I0401 20:53:08.916978       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:53:08.917012       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:53:09.599621       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:53:09.646371       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:53:09.728012       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:53:09.737984       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I0401 20:53:09.740489       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:53:09.751190       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:53:09.981744       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:53:10.811876       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:53:10.828316       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:53:10.840447       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:53:15.132208       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:53:15.187929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:53:43.651783       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0401 20:53:43.674002       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.674437       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.674514       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.674559       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.677017       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.681125       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.681438       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.682718       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.683767       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:53:43.683800       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bbe9dca575efbe13de4f132ea075dc5085700ff0fdb61cda7f60110f8f60d94f] <==
	I0401 20:54:02.673149       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:54:02.673517       1 aggregator.go:171] initial CRD sync complete...
	I0401 20:54:02.673584       1 autoregister_controller.go:144] Starting autoregister controller
	I0401 20:54:02.673612       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:54:02.673638       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:54:02.674932       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0401 20:54:02.674980       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0401 20:54:02.678579       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0401 20:54:02.678940       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 20:54:02.689410       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0401 20:54:02.698179       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 20:54:02.702619       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0401 20:54:02.712955       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:54:02.713011       1 policy_source.go:240] refreshing policies
	I0401 20:54:02.777407       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:54:02.777653       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:54:03.586732       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:54:03.602145       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:54:04.421965       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:54:05.123046       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:54:05.206585       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:54:05.288650       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:54:05.300438       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:54:06.153833       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:54:06.346832       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [06c8bc30eab3bd672a27e572014ef5a4c2c205a457f782753bf8effc5e4ea2e3] <==
	I0401 20:53:14.577720       1 shared_informer.go:320] Caches are synced for stateful set
	I0401 20:53:14.579665       1 shared_informer.go:320] Caches are synced for deployment
	I0401 20:53:14.579787       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0401 20:53:14.579802       1 shared_informer.go:320] Caches are synced for service account
	I0401 20:53:14.579812       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:53:14.586912       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-881088"
	I0401 20:53:14.579826       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:53:14.580829       1 shared_informer.go:320] Caches are synced for GC
	I0401 20:53:14.581020       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0401 20:53:14.585311       1 shared_informer.go:320] Caches are synced for node
	I0401 20:53:14.589381       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:53:14.590308       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:53:14.590361       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:53:14.590390       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:53:14.604027       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-881088" podCIDRs=["10.244.0.0/24"]
	I0401 20:53:14.604174       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-881088"
	I0401 20:53:14.604281       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-881088"
	I0401 20:53:15.091403       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-881088"
	I0401 20:53:15.489488       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="288.117562ms"
	I0401 20:53:15.539941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.414559ms"
	I0401 20:53:15.573631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="33.65259ms"
	I0401 20:53:15.574540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="159.785µs"
	I0401 20:53:17.331032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.905µs"
	I0401 20:53:17.353531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="111.68µs"
	I0401 20:53:18.424618       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-881088"
	
	
	==> kube-controller-manager [0db7b531060963d97889adad33f6914853526edeeb5e6e44ea25c2aa6ad9ef32] <==
	I0401 20:54:05.897402       1 shared_informer.go:320] Caches are synced for disruption
	I0401 20:54:05.900607       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:54:05.906108       1 shared_informer.go:320] Caches are synced for HPA
	I0401 20:54:05.913190       1 shared_informer.go:320] Caches are synced for PVC protection
	I0401 20:54:05.915312       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0401 20:54:05.915524       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:54:05.915643       1 shared_informer.go:320] Caches are synced for cronjob
	I0401 20:54:05.928366       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:54:05.930306       1 shared_informer.go:320] Caches are synced for node
	I0401 20:54:05.930549       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:54:05.930588       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:54:05.930594       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:54:05.930658       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:54:05.930789       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-881088"
	I0401 20:54:05.943138       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:54:05.955503       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:54:05.955603       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0401 20:54:05.955612       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0401 20:54:05.956307       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:54:05.959422       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-881088"
	I0401 20:54:05.964757       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:54:06.164878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="273.355743ms"
	I0401 20:54:06.165020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.875µs"
	I0401 20:54:06.914202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.5782ms"
	I0401 20:54:06.914876       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.501µs"
	
	
	==> kube-proxy [55cb8734ab1b431e0da1b1009243932ebf8cf9f96416cef05b2caba93466c63d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 20:53:16.248703       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 20:53:16.370134       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0401 20:53:16.370369       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:53:16.510909       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 20:53:16.510956       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 20:53:16.510995       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:53:16.528543       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:53:16.529380       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:53:16.530311       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:53:16.534348       1 config.go:199] "Starting service config controller"
	I0401 20:53:16.534837       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:53:16.534879       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:53:16.534887       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:53:16.538181       1 config.go:329] "Starting node config controller"
	I0401 20:53:16.538208       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:53:16.639596       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:53:16.644687       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:53:16.644981       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [7477a0ffb744065ef7a50cfeac5a9114c8c4099360c98702671564c89e7c3d68] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 20:54:04.691776       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 20:54:04.731870       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0401 20:54:04.731963       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:54:04.856371       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 20:54:04.856438       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 20:54:04.856472       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:54:04.869527       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:54:04.870146       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:54:04.870159       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:54:04.876997       1 config.go:199] "Starting service config controller"
	I0401 20:54:04.887366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:54:04.887444       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:54:04.887450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:54:04.887973       1 config.go:329] "Starting node config controller"
	I0401 20:54:04.888008       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:54:04.988691       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:54:04.988723       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:54:04.988732       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [32156bc6e4292307e44c3d7b1a8af9c594ff42cb249a8d981464c45a34316cf9] <==
	I0401 20:54:00.462155       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:54:02.644429       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:54:02.644479       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:54:02.644491       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:54:02.644500       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:54:02.697528       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:54:02.697712       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:54:02.701517       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:54:02.701566       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:54:02.702599       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:54:02.701602       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:54:02.803691       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3ead3572ff8efba912c679737c17ab5dc3e118116c7ab7ce1da4072731fe65cf] <==
	E0401 20:53:08.017774       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:08.017872       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:53:08.017908       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:08.017945       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 20:53:08.017956       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:08.865392       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:53:08.865508       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:53:08.888595       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:53:08.888693       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.033792       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0401 20:53:09.033916       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.051601       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:53:09.052103       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.099986       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 20:53:09.100372       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.100314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:53:09.100585       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.111434       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:53:09.111514       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.147818       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:53:09.147938       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:53:09.224571       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:53:09.224821       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0401 20:53:10.707763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0401 20:53:43.650982       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 01 20:54:01 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:01.587854    2596 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-881088\" not found" node="kubernetes-upgrade-881088"
	Apr 01 20:54:01 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:01.588810    2596 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-881088\" not found" node="kubernetes-upgrade-881088"
	Apr 01 20:54:01 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:01.589105    2596 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-881088\" not found" node="kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.751265    2596 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.769402    2596 kubelet_node_status.go:125] "Node was previously registered" node="kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.769861    2596 kubelet_node_status.go:79] "Successfully registered node" node="kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.770083    2596 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.772029    2596 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:02.805716    2596 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-881088\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.806014    2596 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:02.822590    2596 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-881088\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.822734    2596 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:02.838140    2596 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-881088\" already exists" pod="kube-system/etcd-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.838387    2596 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:02.848180    2596 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-881088\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:02.873474    2596 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-881088"
	Apr 01 20:54:02 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:02.883967    2596 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-881088\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-881088"
	Apr 01 20:54:03 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:03.434326    2596 apiserver.go:52] "Watching apiserver"
	Apr 01 20:54:03 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:03.539795    2596 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 01 20:54:03 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:03.577149    2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f23aab68-46c8-43ff-9079-b575a515ed5f-lib-modules\") pod \"kube-proxy-x9rmw\" (UID: \"f23aab68-46c8-43ff-9079-b575a515ed5f\") " pod="kube-system/kube-proxy-x9rmw"
	Apr 01 20:54:03 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:03.577290    2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f23aab68-46c8-43ff-9079-b575a515ed5f-xtables-lock\") pod \"kube-proxy-x9rmw\" (UID: \"f23aab68-46c8-43ff-9079-b575a515ed5f\") " pod="kube-system/kube-proxy-x9rmw"
	Apr 01 20:54:03 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:03.577522    2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/25488b3e-ec76-44f9-ace3-19cc0ebc2c39-tmp\") pod \"storage-provisioner\" (UID: \"25488b3e-ec76-44f9-ace3-19cc0ebc2c39\") " pod="kube-system/storage-provisioner"
	Apr 01 20:54:06 kubernetes-upgrade-881088 kubelet[2596]: I0401 20:54:06.878386    2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 01 20:54:08 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:08.579091    2596 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540848578421859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:54:08 kubernetes-upgrade-881088 kubelet[2596]: E0401 20:54:08.579151    2596 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540848578421859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3209b3fc86d7b9ed29edbf3ab37430c570b34677e1d44809ece7a6c193de9d55] <==
	I0401 20:53:15.796214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [64babb354f5985db1bd0d45fdbe1006f54e692d53568fd3680231b8d9ede29ef] <==
	I0401 20:54:04.377694       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 20:54:04.407978       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 20:54:04.408166       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 20:54:04.436431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 20:54:04.436926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-881088_f925c3db-8c20-4070-898a-84d95f363c63!
	I0401 20:54:04.439556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7fcc433a-94a4-4506-ab89-d2bcfdf646c2", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-881088_f925c3db-8c20-4070-898a-84d95f363c63 became leader
	I0401 20:54:04.537409       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-881088_f925c3db-8c20-4070-898a-84d95f363c63!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-881088 -n kubernetes-upgrade-881088
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-881088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-881088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-881088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-881088: (1.140674265s)
--- FAIL: TestKubernetesUpgrade (397.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-854311 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-854311 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.147419258s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-854311] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-854311" primary control-plane node in "pause-854311" cluster
	* Updating the running kvm2 "pause-854311" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-854311" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:50:20.468630   52245 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:50:20.468796   52245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:50:20.468813   52245 out.go:358] Setting ErrFile to fd 2...
	I0401 20:50:20.468830   52245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:50:20.469052   52245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:50:20.469740   52245 out.go:352] Setting JSON to false
	I0401 20:50:20.471098   52245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5564,"bootTime":1743535056,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:50:20.471267   52245 start.go:139] virtualization: kvm guest
	I0401 20:50:20.473718   52245 out.go:177] * [pause-854311] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:50:20.475274   52245 notify.go:220] Checking for updates...
	I0401 20:50:20.475325   52245 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:50:20.477006   52245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:50:20.478557   52245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:50:20.479770   52245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:20.481230   52245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:50:20.482353   52245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:50:20.483901   52245 config.go:182] Loaded profile config "pause-854311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:50:20.484373   52245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:50:20.484450   52245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:50:20.500986   52245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0401 20:50:20.501499   52245 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:50:20.502089   52245 main.go:141] libmachine: Using API Version  1
	I0401 20:50:20.502116   52245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:50:20.502494   52245 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:50:20.502665   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:20.502962   52245 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:50:20.503377   52245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:50:20.503423   52245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:50:20.519089   52245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I0401 20:50:20.519643   52245 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:50:20.520291   52245 main.go:141] libmachine: Using API Version  1
	I0401 20:50:20.520331   52245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:50:20.520723   52245 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:50:20.520895   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:20.561310   52245 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 20:50:20.562778   52245 start.go:297] selected driver: kvm2
	I0401 20:50:20.562810   52245 start.go:901] validating driver "kvm2" against &{Name:pause-854311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-854311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:
false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:50:20.562936   52245 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:50:20.563240   52245 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:50:20.563304   52245 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:50:20.579428   52245 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:50:20.580470   52245 cni.go:84] Creating CNI manager for ""
	I0401 20:50:20.580537   52245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:50:20.580607   52245 start.go:340] cluster config:
	{Name:pause-854311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-854311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliase
s:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:50:20.580775   52245 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:50:20.582811   52245 out.go:177] * Starting "pause-854311" primary control-plane node in "pause-854311" cluster
	I0401 20:50:20.584050   52245 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:50:20.584095   52245 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:50:20.584108   52245 cache.go:56] Caching tarball of preloaded images
	I0401 20:50:20.584205   52245 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:50:20.584221   52245 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:50:20.584407   52245 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/config.json ...
	I0401 20:50:20.584662   52245 start.go:360] acquireMachinesLock for pause-854311: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 20:50:20.584717   52245 start.go:364] duration metric: took 31.216µs to acquireMachinesLock for "pause-854311"
	I0401 20:50:20.584738   52245 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:50:20.584748   52245 fix.go:54] fixHost starting: 
	I0401 20:50:20.585148   52245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:50:20.585199   52245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:50:20.600564   52245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0401 20:50:20.600995   52245 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:50:20.601478   52245 main.go:141] libmachine: Using API Version  1
	I0401 20:50:20.601505   52245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:50:20.601873   52245 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:50:20.602061   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:20.602202   52245 main.go:141] libmachine: (pause-854311) Calling .GetState
	I0401 20:50:20.603933   52245 fix.go:112] recreateIfNeeded on pause-854311: state=Running err=<nil>
	W0401 20:50:20.603955   52245 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:50:20.605822   52245 out.go:177] * Updating the running kvm2 "pause-854311" VM ...
	I0401 20:50:20.607235   52245 machine.go:93] provisionDockerMachine start ...
	I0401 20:50:20.607258   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:20.607516   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:20.610456   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.610999   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:20.611027   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.611176   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:20.611341   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:20.611494   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:20.611629   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:20.611776   52245 main.go:141] libmachine: Using SSH client type: native
	I0401 20:50:20.611997   52245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0401 20:50:20.612007   52245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:50:20.730826   52245 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-854311
	
	I0401 20:50:20.730853   52245 main.go:141] libmachine: (pause-854311) Calling .GetMachineName
	I0401 20:50:20.731089   52245 buildroot.go:166] provisioning hostname "pause-854311"
	I0401 20:50:20.731112   52245 main.go:141] libmachine: (pause-854311) Calling .GetMachineName
	I0401 20:50:20.731275   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:20.734440   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.734871   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:20.734897   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.735128   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:20.735303   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:20.735485   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:20.735606   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:20.735756   52245 main.go:141] libmachine: Using SSH client type: native
	I0401 20:50:20.736005   52245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0401 20:50:20.736020   52245 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-854311 && echo "pause-854311" | sudo tee /etc/hostname
	I0401 20:50:20.863557   52245 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-854311
	
	I0401 20:50:20.863584   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:20.867198   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.867696   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:20.867737   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.867923   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:20.868108   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:20.868320   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:20.868522   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:20.868701   52245 main.go:141] libmachine: Using SSH client type: native
	I0401 20:50:20.868912   52245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0401 20:50:20.868933   52245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-854311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-854311/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-854311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:50:20.987502   52245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:50:20.987573   52245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 20:50:20.987619   52245 buildroot.go:174] setting up certificates
	I0401 20:50:20.987631   52245 provision.go:84] configureAuth start
	I0401 20:50:20.987650   52245 main.go:141] libmachine: (pause-854311) Calling .GetMachineName
	I0401 20:50:20.987932   52245 main.go:141] libmachine: (pause-854311) Calling .GetIP
	I0401 20:50:20.990667   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.991050   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:20.991087   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.991242   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:20.993538   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.993918   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:20.993951   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:20.994069   52245 provision.go:143] copyHostCerts
	I0401 20:50:20.994131   52245 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 20:50:20.994144   52245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 20:50:20.994233   52245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 20:50:20.994332   52245 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 20:50:20.994341   52245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 20:50:20.994361   52245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 20:50:20.994410   52245 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 20:50:20.994417   52245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 20:50:20.994434   52245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 20:50:20.994495   52245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.pause-854311 san=[127.0.0.1 192.168.83.73 localhost minikube pause-854311]
	I0401 20:50:21.345500   52245 provision.go:177] copyRemoteCerts
	I0401 20:50:21.345559   52245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:50:21.345586   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:21.348437   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:21.348767   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:21.348801   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:21.348963   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:21.349150   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:21.349317   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:21.349449   52245 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/pause-854311/id_rsa Username:docker}
	I0401 20:50:21.441717   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:50:21.476438   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0401 20:50:21.504991   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:50:21.533864   52245 provision.go:87] duration metric: took 546.214476ms to configureAuth
	I0401 20:50:21.533896   52245 buildroot.go:189] setting minikube options for container-runtime
	I0401 20:50:21.534158   52245 config.go:182] Loaded profile config "pause-854311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:50:21.534274   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:21.536957   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:21.537342   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:21.537378   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:21.537510   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:21.537712   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:21.537885   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:21.538052   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:21.538235   52245 main.go:141] libmachine: Using SSH client type: native
	I0401 20:50:21.538439   52245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0401 20:50:21.538452   52245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:50:27.537481   52245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:50:27.537510   52245 machine.go:96] duration metric: took 6.930259893s to provisionDockerMachine
	I0401 20:50:27.537524   52245 start.go:293] postStartSetup for "pause-854311" (driver="kvm2")
	I0401 20:50:27.537536   52245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:50:27.537578   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:27.537994   52245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:50:27.538024   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:27.541154   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.541695   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:27.541726   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.541939   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:27.542120   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:27.542308   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:27.542485   52245 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/pause-854311/id_rsa Username:docker}
	I0401 20:50:27.641938   52245 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:50:27.646536   52245 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 20:50:27.646561   52245 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 20:50:27.646626   52245 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 20:50:27.646748   52245 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 20:50:27.646874   52245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:50:27.657308   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:50:27.688749   52245 start.go:296] duration metric: took 151.209393ms for postStartSetup
	I0401 20:50:27.688799   52245 fix.go:56] duration metric: took 7.104051283s for fixHost
	I0401 20:50:27.688824   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:27.691608   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.691985   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:27.692017   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.692146   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:27.692318   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:27.692462   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:27.692643   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:27.692821   52245 main.go:141] libmachine: Using SSH client type: native
	I0401 20:50:27.693014   52245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.83.73 22 <nil> <nil>}
	I0401 20:50:27.693024   52245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 20:50:27.811321   52245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743540627.803574446
	
	I0401 20:50:27.811350   52245 fix.go:216] guest clock: 1743540627.803574446
	I0401 20:50:27.811361   52245 fix.go:229] Guest: 2025-04-01 20:50:27.803574446 +0000 UTC Remote: 2025-04-01 20:50:27.688804371 +0000 UTC m=+7.260674203 (delta=114.770075ms)
	I0401 20:50:27.811404   52245 fix.go:200] guest clock delta is within tolerance: 114.770075ms
	I0401 20:50:27.811411   52245 start.go:83] releasing machines lock for "pause-854311", held for 7.226680284s
	I0401 20:50:27.811440   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:27.811720   52245 main.go:141] libmachine: (pause-854311) Calling .GetIP
	I0401 20:50:27.814782   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.815256   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:27.815291   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.815482   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:27.815956   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:27.816116   52245 main.go:141] libmachine: (pause-854311) Calling .DriverName
	I0401 20:50:27.816202   52245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:50:27.816249   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:27.816322   52245 ssh_runner.go:195] Run: cat /version.json
	I0401 20:50:27.816363   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHHostname
	I0401 20:50:27.818993   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.819391   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:27.819425   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.819437   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.819598   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:27.819811   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:27.819937   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:27.819967   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:27.820035   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:27.820134   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHPort
	I0401 20:50:27.820456   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHKeyPath
	I0401 20:50:27.820488   52245 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/pause-854311/id_rsa Username:docker}
	I0401 20:50:27.820605   52245 main.go:141] libmachine: (pause-854311) Calling .GetSSHUsername
	I0401 20:50:27.820738   52245 sshutil.go:53] new ssh client: &{IP:192.168.83.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/pause-854311/id_rsa Username:docker}
	I0401 20:50:27.924891   52245 ssh_runner.go:195] Run: systemctl --version
	I0401 20:50:27.934152   52245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:50:28.109571   52245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 20:50:28.116441   52245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 20:50:28.116514   52245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:50:28.133442   52245 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:50:28.133472   52245 start.go:495] detecting cgroup driver to use...
	I0401 20:50:28.133540   52245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:50:28.158043   52245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:50:28.179495   52245 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:50:28.179590   52245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:50:28.198658   52245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:50:28.222164   52245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:50:28.400621   52245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:50:28.588444   52245 docker.go:233] disabling docker service ...
	I0401 20:50:28.588508   52245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:50:28.606599   52245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:50:28.623998   52245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:50:28.780230   52245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:50:28.966970   52245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:50:28.983870   52245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:50:29.011113   52245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:50:29.011189   52245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.023748   52245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:50:29.023816   52245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.037449   52245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.051106   52245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.064200   52245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:50:29.080556   52245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.092812   52245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.106932   52245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:50:29.121476   52245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:50:29.136292   52245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:50:29.211601   52245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:50:29.566793   52245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:50:31.442990   52245 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.876158599s)
	I0401 20:50:31.443019   52245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:50:31.443069   52245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:50:31.450243   52245 start.go:563] Will wait 60s for crictl version
	I0401 20:50:31.450307   52245 ssh_runner.go:195] Run: which crictl
	I0401 20:50:31.456337   52245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:50:31.498763   52245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 20:50:31.498867   52245 ssh_runner.go:195] Run: crio --version
	I0401 20:50:31.533847   52245 ssh_runner.go:195] Run: crio --version
	I0401 20:50:31.569368   52245 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 20:50:31.570852   52245 main.go:141] libmachine: (pause-854311) Calling .GetIP
	I0401 20:50:31.574161   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:31.574619   52245 main.go:141] libmachine: (pause-854311) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:52:c8", ip: ""} in network mk-pause-854311: {Iface:virbr2 ExpiryTime:2025-04-01 21:49:42 +0000 UTC Type:0 Mac:52:54:00:3f:52:c8 Iaid: IPaddr:192.168.83.73 Prefix:24 Hostname:pause-854311 Clientid:01:52:54:00:3f:52:c8}
	I0401 20:50:31.574648   52245 main.go:141] libmachine: (pause-854311) DBG | domain pause-854311 has defined IP address 192.168.83.73 and MAC address 52:54:00:3f:52:c8 in network mk-pause-854311
	I0401 20:50:31.574912   52245 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0401 20:50:31.581203   52245 kubeadm.go:883] updating cluster {Name:pause-854311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-854311 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secur
ity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:50:31.581339   52245 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:50:31.581399   52245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:50:31.631614   52245 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:50:31.631648   52245 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:50:31.631739   52245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:50:31.669103   52245 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:50:31.669124   52245 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:50:31.669131   52245 kubeadm.go:934] updating node { 192.168.83.73 8443 v1.32.2 crio true true} ...
	I0401 20:50:31.669247   52245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-854311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-854311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:50:31.669336   52245 ssh_runner.go:195] Run: crio config
	I0401 20:50:31.718133   52245 cni.go:84] Creating CNI manager for ""
	I0401 20:50:31.718158   52245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:50:31.718171   52245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:50:31.718207   52245 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.73 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-854311 NodeName:pause-854311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:50:31.718382   52245 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-854311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.73"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.73"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:50:31.718453   52245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:50:31.733229   52245 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:50:31.733305   52245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:50:31.745633   52245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0401 20:50:31.765244   52245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:50:31.786426   52245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0401 20:50:31.808114   52245 ssh_runner.go:195] Run: grep 192.168.83.73	control-plane.minikube.internal$ /etc/hosts
	I0401 20:50:31.812858   52245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:50:31.950375   52245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:50:31.970882   52245 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311 for IP: 192.168.83.73
	I0401 20:50:31.970910   52245 certs.go:194] generating shared ca certs ...
	I0401 20:50:31.970930   52245 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:31.971099   52245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 20:50:31.971156   52245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 20:50:31.971170   52245 certs.go:256] generating profile certs ...
	I0401 20:50:31.971282   52245 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/client.key
	I0401 20:50:31.971377   52245 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/apiserver.key.370fb057
	I0401 20:50:31.971481   52245 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/proxy-client.key
	I0401 20:50:31.971654   52245 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 20:50:31.971696   52245 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 20:50:31.971709   52245 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:50:31.971750   52245 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:50:31.971783   52245 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:50:31.971828   52245 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 20:50:31.971886   52245 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:50:31.973620   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:50:32.009185   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 20:50:32.051235   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:50:32.091184   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:50:32.125096   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 20:50:32.155737   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:50:32.183562   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:50:32.211847   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/pause-854311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:50:32.238828   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 20:50:32.271276   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:50:32.299681   52245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 20:50:32.329428   52245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:50:32.349263   52245 ssh_runner.go:195] Run: openssl version
	I0401 20:50:32.357308   52245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 20:50:32.373844   52245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 20:50:32.379257   52245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 20:50:32.379326   52245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 20:50:32.386446   52245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 20:50:32.396652   52245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 20:50:32.408116   52245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 20:50:32.413153   52245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 20:50:32.413215   52245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 20:50:32.419976   52245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:50:32.434001   52245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:50:32.446866   52245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:50:32.452047   52245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:50:32.452114   52245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:50:32.458948   52245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:50:32.470303   52245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:50:32.475730   52245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:50:32.484131   52245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:50:32.492594   52245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:50:32.501375   52245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:50:32.508905   52245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:50:32.515257   52245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:50:32.523747   52245 kubeadm.go:392] StartCluster: {Name:pause-854311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-854311 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security
-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:50:32.523857   52245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:50:32.523914   52245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:50:32.576388   52245 cri.go:89] found id: "8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c"
	I0401 20:50:32.576419   52245 cri.go:89] found id: "47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf"
	I0401 20:50:32.576425   52245 cri.go:89] found id: "980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133"
	I0401 20:50:32.576431   52245 cri.go:89] found id: "a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d"
	I0401 20:50:32.576435   52245 cri.go:89] found id: "c3cd47ab9031ec1fa1c9a42e369c54f36865fb9786a42d200ed7c062bf7a0dee"
	I0401 20:50:32.576440   52245 cri.go:89] found id: "91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7"
	I0401 20:50:32.576444   52245 cri.go:89] found id: "0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315"
	I0401 20:50:32.576448   52245 cri.go:89] found id: ""
	I0401 20:50:32.576499   52245 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
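The restart trace above re-validates the existing control-plane certificates before reusing them, by shelling out to `openssl x509 -noout -in <cert> -checkend 86400` for each file under /var/lib/minikube/certs (a zero exit means the certificate is still valid for at least the next 86400 seconds, i.e. 24 hours). The following is only an illustrative Go sketch of that same check, mirroring the `Run:` lines in the log; the helper name certValidFor and the hard-coded example path are made up here and are not minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor reports whether the certificate at path remains valid for at
// least `seconds` more seconds, using the same invocation seen in the log:
//   openssl x509 -noout -in <cert> -checkend <seconds>
// openssl exits 0 when the certificate will NOT expire within the window.
func certValidFor(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// Non-zero exit: certificate expires within the window.
			return false, nil
		}
		// openssl not found, unreadable file, etc.
		return false, err
	}
	return true, nil
}

func main() {
	// Example path taken from the log above; 86400s = 24h, as in the log.
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(ok, err)
}
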
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-854311 -n pause-854311
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-854311 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-854311 logs -n 25: (1.389710151s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC | 01 Apr 25 20:46 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC | 01 Apr 25 20:47 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:47 UTC |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-838550         | offline-crio-838550       | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:49 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-881088   | kubernetes-upgrade-881088 | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:49 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-877059      | minikube                  | jenkins | v1.26.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:49 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:49 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-838550         | offline-crio-838550       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:49 UTC |
	| start   | -p pause-854311 --memory=2048  | pause-854311              | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:50 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:49 UTC |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:50 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-877059      | running-upgrade-877059    | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-850365 sudo    | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| start   | -p pause-854311                | pause-854311              | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-850365 sudo    | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	| start   | -p force-systemd-env-818542    | force-systemd-env-818542  | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC |                     |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:50:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:50:51.001456   52720 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:50:51.001733   52720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:50:51.001745   52720 out.go:358] Setting ErrFile to fd 2...
	I0401 20:50:51.001749   52720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:50:51.001949   52720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:50:51.002551   52720 out.go:352] Setting JSON to false
	I0401 20:50:51.003601   52720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5595,"bootTime":1743535056,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:50:51.003663   52720 start.go:139] virtualization: kvm guest
	I0401 20:50:51.005804   52720 out.go:177] * [force-systemd-env-818542] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:50:51.007098   52720 notify.go:220] Checking for updates...
	I0401 20:50:51.007117   52720 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:50:51.008419   52720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:50:51.009904   52720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:50:51.011195   52720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:51.012783   52720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:50:51.014373   52720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0401 20:50:51.015984   52720 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:50:51.016099   52720 config.go:182] Loaded profile config "pause-854311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:50:51.016196   52720 config.go:182] Loaded profile config "running-upgrade-877059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0401 20:50:51.016298   52720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:50:51.054200   52720 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 20:50:51.055747   52720 start.go:297] selected driver: kvm2
	I0401 20:50:51.055768   52720 start.go:901] validating driver "kvm2" against <nil>
	I0401 20:50:51.055790   52720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:50:51.056541   52720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:50:51.056630   52720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:50:51.073600   52720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:50:51.073649   52720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:50:51.074007   52720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 20:50:51.074050   52720 cni.go:84] Creating CNI manager for ""
	I0401 20:50:51.074117   52720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:50:51.074132   52720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 20:50:51.074191   52720 start.go:340] cluster config:
	{Name:force-systemd-env-818542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-systemd-env-818542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:50:51.074340   52720 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:50:51.076247   52720 out.go:177] * Starting "force-systemd-env-818542" primary control-plane node in "force-systemd-env-818542" cluster
	I0401 20:50:51.077604   52720 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:50:51.077642   52720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:50:51.077658   52720 cache.go:56] Caching tarball of preloaded images
	I0401 20:50:51.077734   52720 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:50:51.077747   52720 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:50:51.077829   52720 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/force-systemd-env-818542/config.json ...
	I0401 20:50:51.077845   52720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/force-systemd-env-818542/config.json: {Name:mkd7a89da1b6548c562f66657759c49af660e660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:51.077964   52720 start.go:360] acquireMachinesLock for force-systemd-env-818542: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 20:50:51.077992   52720 start.go:364] duration metric: took 15.282µs to acquireMachinesLock for "force-systemd-env-818542"
	I0401 20:50:51.078005   52720 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-818542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.
2 ClusterName:force-systemd-env-818542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetric
s:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:50:51.078045   52720 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 20:50:52.903864   52245 pod_ready.go:93] pod "etcd-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.903889   52245 pod_ready.go:82] duration metric: took 4.508063117s for pod "etcd-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.903900   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.909036   52245 pod_ready.go:93] pod "kube-apiserver-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.909061   52245 pod_ready.go:82] duration metric: took 5.152939ms for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.909070   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.914050   52245 pod_ready.go:93] pod "kube-controller-manager-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.914072   52245 pod_ready.go:82] duration metric: took 4.995179ms for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.914084   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.917926   52245 pod_ready.go:93] pod "kube-proxy-9tqpq" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.917953   52245 pod_ready.go:82] duration metric: took 3.860807ms for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.917965   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.922331   52245 pod_ready.go:93] pod "kube-scheduler-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.922353   52245 pod_ready.go:82] duration metric: took 4.381239ms for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.922364   52245 pod_ready.go:39] duration metric: took 12.040518015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:50:52.922384   52245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:50:52.936535   52245 ops.go:34] apiserver oom_adj: -16
	I0401 20:50:52.936559   52245 kubeadm.go:597] duration metric: took 20.299547463s to restartPrimaryControlPlane
	I0401 20:50:52.936568   52245 kubeadm.go:394] duration metric: took 20.412829628s to StartCluster
	I0401 20:50:52.936588   52245 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:52.936681   52245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:50:52.937480   52245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:52.937736   52245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:50:52.937841   52245 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:50:52.937984   52245 config.go:182] Loaded profile config "pause-854311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:50:52.939304   52245 out.go:177] * Verifying Kubernetes components...
	I0401 20:50:52.939317   52245 out.go:177] * Enabled addons: 
	I0401 20:50:51.862744   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:51.862964   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:50.351417   51684 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:50:50.351445   51684 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:50:50.351458   51684 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0401 20:50:52.357715   51684 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:50:52.357742   51684 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:50:52.357761   51684 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0401 20:50:54.364682   51684 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:50:54.364720   51684 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:50:54.364738   51684 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0401 20:50:52.940499   52245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:50:52.940502   52245 addons.go:514] duration metric: took 2.671334ms for enable addons: enabled=[]
	I0401 20:50:53.110111   52245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:50:53.127757   52245 node_ready.go:35] waiting up to 6m0s for node "pause-854311" to be "Ready" ...
	I0401 20:50:53.130661   52245 node_ready.go:49] node "pause-854311" has status "Ready":"True"
	I0401 20:50:53.130681   52245 node_ready.go:38] duration metric: took 2.893977ms for node "pause-854311" to be "Ready" ...
	I0401 20:50:53.130689   52245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:50:53.300826   52245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gzdq9" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:53.700391   52245 pod_ready.go:93] pod "coredns-668d6bf9bc-gzdq9" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:53.700424   52245 pod_ready.go:82] duration metric: took 399.571945ms for pod "coredns-668d6bf9bc-gzdq9" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:53.700440   52245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.099987   52245 pod_ready.go:93] pod "etcd-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:54.100016   52245 pod_ready.go:82] duration metric: took 399.567597ms for pod "etcd-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.100025   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.501079   52245 pod_ready.go:93] pod "kube-apiserver-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:54.501101   52245 pod_ready.go:82] duration metric: took 401.069982ms for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.501111   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.900845   52245 pod_ready.go:93] pod "kube-controller-manager-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:54.900873   52245 pod_ready.go:82] duration metric: took 399.753846ms for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.900887   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:55.300539   52245 pod_ready.go:93] pod "kube-proxy-9tqpq" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:55.300560   52245 pod_ready.go:82] duration metric: took 399.66551ms for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:55.300569   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
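
	The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True, one pod at a time with a per-pod timeout. A minimal sketch of that style of polling with client-go follows; the clientset, the 400ms interval, the 6m timeout, and the hard-coded kube-system namespace are assumptions for illustration, not minikube's actual implementation.

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a kube-system pod until its Ready condition is True,
	// roughly mirroring the pod_ready.go waits in the log above (sketch only).
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	In the log the same loop is run sequentially for coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler, which is why each wait reports its own ~400ms duration.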
	I0401 20:50:51.079765   52720 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0401 20:50:51.079899   52720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:50:51.079939   52720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:50:51.094608   52720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0401 20:50:51.095079   52720 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:50:51.095609   52720 main.go:141] libmachine: Using API Version  1
	I0401 20:50:51.095628   52720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:50:51.096017   52720 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:50:51.096241   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .GetMachineName
	I0401 20:50:51.096383   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .DriverName
	I0401 20:50:51.096514   52720 start.go:159] libmachine.API.Create for "force-systemd-env-818542" (driver="kvm2")
	I0401 20:50:51.096541   52720 client.go:168] LocalClient.Create starting
	I0401 20:50:51.096581   52720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 20:50:51.096621   52720 main.go:141] libmachine: Decoding PEM data...
	I0401 20:50:51.096645   52720 main.go:141] libmachine: Parsing certificate...
	I0401 20:50:51.096718   52720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 20:50:51.096749   52720 main.go:141] libmachine: Decoding PEM data...
	I0401 20:50:51.096770   52720 main.go:141] libmachine: Parsing certificate...
	I0401 20:50:51.096790   52720 main.go:141] libmachine: Running pre-create checks...
	I0401 20:50:51.096806   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .PreCreateCheck
	I0401 20:50:51.097097   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .GetConfigRaw
	I0401 20:50:51.097547   52720 main.go:141] libmachine: Creating machine...
	I0401 20:50:51.097574   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .Create
	I0401 20:50:51.097702   52720 main.go:141] libmachine: (force-systemd-env-818542) creating KVM machine...
	I0401 20:50:51.097721   52720 main.go:141] libmachine: (force-systemd-env-818542) creating network...
	I0401 20:50:51.098982   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | found existing default KVM network
	I0401 20:50:51.100048   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.099894   52759 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:c4:50} reservation:<nil>}
	I0401 20:50:51.101271   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.101173   52759 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201a20}
	I0401 20:50:51.101289   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | created network xml: 
	I0401 20:50:51.101302   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | <network>
	I0401 20:50:51.101315   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   <name>mk-force-systemd-env-818542</name>
	I0401 20:50:51.101329   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   <dns enable='no'/>
	I0401 20:50:51.101340   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   
	I0401 20:50:51.101355   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0401 20:50:51.101369   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |     <dhcp>
	I0401 20:50:51.101396   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0401 20:50:51.101420   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |     </dhcp>
	I0401 20:50:51.101435   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   </ip>
	I0401 20:50:51.101446   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   
	I0401 20:50:51.101472   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | </network>
	I0401 20:50:51.101482   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | 
	I0401 20:50:51.106960   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | trying to create private KVM network mk-force-systemd-env-818542 192.168.50.0/24...
	I0401 20:50:51.177974   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | private KVM network mk-force-systemd-env-818542 192.168.50.0/24 created
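
	The driver renders the <network> XML shown above, then asks libvirt to create the private network before building the machine. A hedged sketch of that step with the libvirt Go bindings; the module path, connection URI and XML literal are assumptions used to keep the example self-contained.

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt" // assumed binding; older code imports github.com/libvirt/libvirt-go
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Same shape as the network XML printed in the log above (values are placeholders).
		xml := `<network>
	  <name>mk-force-systemd-env-818542</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp><range start='192.168.50.2' end='192.168.50.253'/></dhcp>
	  </ip>
	</network>`

		net, err := conn.NetworkDefineXML(xml) // persist the network definition
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()
		if err := net.Create(); err != nil { // activate it ("private KVM network ... created")
			log.Fatal(err)
		}
	}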
	I0401 20:50:51.178014   52720 main.go:141] libmachine: (force-systemd-env-818542) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542 ...
	I0401 20:50:51.178031   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.177956   52759 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:51.178048   52720 main.go:141] libmachine: (force-systemd-env-818542) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 20:50:51.178076   52720 main.go:141] libmachine: (force-systemd-env-818542) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 20:50:51.415934   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.415772   52759 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/id_rsa...
	I0401 20:50:51.746673   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.746505   52759 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/force-systemd-env-818542.rawdisk...
	I0401 20:50:51.746708   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | Writing magic tar header
	I0401 20:50:51.746769   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | Writing SSH key tar header
	I0401 20:50:51.746799   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542 (perms=drwx------)
	I0401 20:50:51.746815   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.746649   52759 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542 ...
	I0401 20:50:51.746847   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 20:50:51.746869   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542
	I0401 20:50:51.746883   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 20:50:51.746899   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 20:50:51.746916   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 20:50:51.746926   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 20:50:51.746938   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 20:50:51.746949   52720 main.go:141] libmachine: (force-systemd-env-818542) creating domain...
	I0401 20:50:51.746968   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:51.746980   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 20:50:51.747002   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 20:50:51.747024   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins
	I0401 20:50:51.747047   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home
	I0401 20:50:51.747059   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | skipping /home - not owner
	I0401 20:50:51.748112   52720 main.go:141] libmachine: (force-systemd-env-818542) define libvirt domain using xml: 
	I0401 20:50:51.748129   52720 main.go:141] libmachine: (force-systemd-env-818542) <domain type='kvm'>
	I0401 20:50:51.748139   52720 main.go:141] libmachine: (force-systemd-env-818542)   <name>force-systemd-env-818542</name>
	I0401 20:50:51.748146   52720 main.go:141] libmachine: (force-systemd-env-818542)   <memory unit='MiB'>2048</memory>
	I0401 20:50:51.748154   52720 main.go:141] libmachine: (force-systemd-env-818542)   <vcpu>2</vcpu>
	I0401 20:50:51.748172   52720 main.go:141] libmachine: (force-systemd-env-818542)   <features>
	I0401 20:50:51.748184   52720 main.go:141] libmachine: (force-systemd-env-818542)     <acpi/>
	I0401 20:50:51.748196   52720 main.go:141] libmachine: (force-systemd-env-818542)     <apic/>
	I0401 20:50:51.748204   52720 main.go:141] libmachine: (force-systemd-env-818542)     <pae/>
	I0401 20:50:51.748213   52720 main.go:141] libmachine: (force-systemd-env-818542)     
	I0401 20:50:51.748218   52720 main.go:141] libmachine: (force-systemd-env-818542)   </features>
	I0401 20:50:51.748230   52720 main.go:141] libmachine: (force-systemd-env-818542)   <cpu mode='host-passthrough'>
	I0401 20:50:51.748238   52720 main.go:141] libmachine: (force-systemd-env-818542)   
	I0401 20:50:51.748242   52720 main.go:141] libmachine: (force-systemd-env-818542)   </cpu>
	I0401 20:50:51.748249   52720 main.go:141] libmachine: (force-systemd-env-818542)   <os>
	I0401 20:50:51.748256   52720 main.go:141] libmachine: (force-systemd-env-818542)     <type>hvm</type>
	I0401 20:50:51.748275   52720 main.go:141] libmachine: (force-systemd-env-818542)     <boot dev='cdrom'/>
	I0401 20:50:51.748286   52720 main.go:141] libmachine: (force-systemd-env-818542)     <boot dev='hd'/>
	I0401 20:50:51.748311   52720 main.go:141] libmachine: (force-systemd-env-818542)     <bootmenu enable='no'/>
	I0401 20:50:51.748329   52720 main.go:141] libmachine: (force-systemd-env-818542)   </os>
	I0401 20:50:51.748340   52720 main.go:141] libmachine: (force-systemd-env-818542)   <devices>
	I0401 20:50:51.748352   52720 main.go:141] libmachine: (force-systemd-env-818542)     <disk type='file' device='cdrom'>
	I0401 20:50:51.748371   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/boot2docker.iso'/>
	I0401 20:50:51.748387   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target dev='hdc' bus='scsi'/>
	I0401 20:50:51.748400   52720 main.go:141] libmachine: (force-systemd-env-818542)       <readonly/>
	I0401 20:50:51.748411   52720 main.go:141] libmachine: (force-systemd-env-818542)     </disk>
	I0401 20:50:51.748444   52720 main.go:141] libmachine: (force-systemd-env-818542)     <disk type='file' device='disk'>
	I0401 20:50:51.748461   52720 main.go:141] libmachine: (force-systemd-env-818542)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 20:50:51.748476   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/force-systemd-env-818542.rawdisk'/>
	I0401 20:50:51.748487   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target dev='hda' bus='virtio'/>
	I0401 20:50:51.748497   52720 main.go:141] libmachine: (force-systemd-env-818542)     </disk>
	I0401 20:50:51.748507   52720 main.go:141] libmachine: (force-systemd-env-818542)     <interface type='network'>
	I0401 20:50:51.748518   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source network='mk-force-systemd-env-818542'/>
	I0401 20:50:51.748534   52720 main.go:141] libmachine: (force-systemd-env-818542)       <model type='virtio'/>
	I0401 20:50:51.748546   52720 main.go:141] libmachine: (force-systemd-env-818542)     </interface>
	I0401 20:50:51.748557   52720 main.go:141] libmachine: (force-systemd-env-818542)     <interface type='network'>
	I0401 20:50:51.748570   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source network='default'/>
	I0401 20:50:51.748582   52720 main.go:141] libmachine: (force-systemd-env-818542)       <model type='virtio'/>
	I0401 20:50:51.748605   52720 main.go:141] libmachine: (force-systemd-env-818542)     </interface>
	I0401 20:50:51.748621   52720 main.go:141] libmachine: (force-systemd-env-818542)     <serial type='pty'>
	I0401 20:50:51.748633   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target port='0'/>
	I0401 20:50:51.748644   52720 main.go:141] libmachine: (force-systemd-env-818542)     </serial>
	I0401 20:50:51.748657   52720 main.go:141] libmachine: (force-systemd-env-818542)     <console type='pty'>
	I0401 20:50:51.748668   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target type='serial' port='0'/>
	I0401 20:50:51.748678   52720 main.go:141] libmachine: (force-systemd-env-818542)     </console>
	I0401 20:50:51.748693   52720 main.go:141] libmachine: (force-systemd-env-818542)     <rng model='virtio'>
	I0401 20:50:51.748707   52720 main.go:141] libmachine: (force-systemd-env-818542)       <backend model='random'>/dev/random</backend>
	I0401 20:50:51.748717   52720 main.go:141] libmachine: (force-systemd-env-818542)     </rng>
	I0401 20:50:51.748727   52720 main.go:141] libmachine: (force-systemd-env-818542)     
	I0401 20:50:51.748736   52720 main.go:141] libmachine: (force-systemd-env-818542)     
	I0401 20:50:51.748745   52720 main.go:141] libmachine: (force-systemd-env-818542)   </devices>
	I0401 20:50:51.748762   52720 main.go:141] libmachine: (force-systemd-env-818542) </domain>
	I0401 20:50:51.748772   52720 main.go:141] libmachine: (force-systemd-env-818542) 
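
	The domain XML above is then handed to libvirt to define and start the VM (the "define libvirt domain using xml" and "starting domain..." lines). A minimal sketch of that define-then-start step, assuming an already-open *libvirt.Connect and the XML string from the log; it is not the driver's actual code.

	// defineAndStart is a sketch only: conn and domainXML are assumed to come
	// from the caller (see the network example earlier in this report).
	func defineAndStart(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			return nil, err
		}
		if err := dom.Create(); err != nil { // boot the domain ("starting domain...")
			dom.Free()
			return nil, err
		}
		return dom, nil
	}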
	I0401 20:50:51.752967   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:52:17:9b in network default
	I0401 20:50:51.753537   52720 main.go:141] libmachine: (force-systemd-env-818542) starting domain...
	I0401 20:50:51.753557   52720 main.go:141] libmachine: (force-systemd-env-818542) ensuring networks are active...
	I0401 20:50:51.753582   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:51.754259   52720 main.go:141] libmachine: (force-systemd-env-818542) Ensuring network default is active
	I0401 20:50:51.754611   52720 main.go:141] libmachine: (force-systemd-env-818542) Ensuring network mk-force-systemd-env-818542 is active
	I0401 20:50:51.755194   52720 main.go:141] libmachine: (force-systemd-env-818542) getting domain XML...
	I0401 20:50:51.755996   52720 main.go:141] libmachine: (force-systemd-env-818542) creating domain...
	I0401 20:50:53.004542   52720 main.go:141] libmachine: (force-systemd-env-818542) waiting for IP...
	I0401 20:50:53.005503   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:53.006090   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:53.006146   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:53.006078   52759 retry.go:31] will retry after 238.360538ms: waiting for domain to come up
	I0401 20:50:53.246711   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:53.247369   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:53.247401   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:53.247311   52759 retry.go:31] will retry after 378.94785ms: waiting for domain to come up
	I0401 20:50:53.627928   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:53.628417   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:53.628440   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:53.628378   52759 retry.go:31] will retry after 474.609475ms: waiting for domain to come up
	I0401 20:50:54.105074   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:54.105633   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:54.105661   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:54.105598   52759 retry.go:31] will retry after 402.97083ms: waiting for domain to come up
	I0401 20:50:54.510323   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:54.510817   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:54.510857   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:54.510812   52759 retry.go:31] will retry after 705.269755ms: waiting for domain to come up
	I0401 20:50:55.218477   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:55.218964   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:55.218998   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:55.218914   52759 retry.go:31] will retry after 798.06074ms: waiting for domain to come up
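
	While the new domain boots, the driver repeatedly looks for its IP and backs off between attempts (the retry.go "will retry after ..." lines above). A small sketch of that retry-with-jitter pattern using only the standard library; the poll callback, the base interval and the cap are illustrative assumptions.

	package main

	import (
		"errors"
		"math/rand"
		"time"
	)

	// retryUntil keeps calling poll until it succeeds or the deadline passes,
	// sleeping a randomized, growing interval between attempts, like the
	// "will retry after 238ms / 378ms / 474ms ..." progression in the log.
	func retryUntil(deadline time.Duration, poll func() error) error {
		base := 200 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if err := poll(); err == nil {
				return nil
			}
			time.Sleep(base + time.Duration(rand.Int63n(int64(base)))) // add jitter
			if base < 2*time.Second {
				base *= 2 // grow the base interval, bounded
			}
		}
		return errors.New("timed out waiting for domain IP")
	}

	In the log the poll step is a lookup of the domain's MAC address in the private network's DHCP leases; until a lease appears, each attempt logs "unable to find current IP address" and schedules the next retry.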
	I0401 20:50:55.701159   52245 pod_ready.go:93] pod "kube-scheduler-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:55.701184   52245 pod_ready.go:82] duration metric: took 400.609485ms for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:55.701192   52245 pod_ready.go:39] duration metric: took 2.570493716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:50:55.701206   52245 api_server.go:52] waiting for apiserver process to appear ...
	I0401 20:50:55.701262   52245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:50:55.717047   52245 api_server.go:72] duration metric: took 2.779277384s to wait for apiserver process to appear ...
	I0401 20:50:55.717077   52245 api_server.go:88] waiting for apiserver healthz status ...
	I0401 20:50:55.717095   52245 api_server.go:253] Checking apiserver healthz at https://192.168.83.73:8443/healthz ...
	I0401 20:50:55.722955   52245 api_server.go:279] https://192.168.83.73:8443/healthz returned 200:
	ok
	I0401 20:50:55.724073   52245 api_server.go:141] control plane version: v1.32.2
	I0401 20:50:55.724092   52245 api_server.go:131] duration metric: took 7.009068ms to wait for apiserver health ...
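
	The api_server.go lines above probe https://<node-ip>:8443/healthz and treat an HTTP 200 with body "ok" as healthy. A hedged sketch of that probe; skipping TLS verification is an assumption made to keep the example self-contained (a real client would present the cluster CA from the kubeconfig).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz GETs /healthz on the apiserver and reports whether it
	// answered 200 "ok", as in the "returned 200: ok" lines above.
	func checkHealthz(addr string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch only: skip certificate verification
			// instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(fmt.Sprintf("https://%s/healthz", addr))
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	An unhealthy apiserver instead returns the per-hook "[+] ... ok" / "healthz check failed" report seen earlier in this log.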
	I0401 20:50:55.724100   52245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:50:55.900325   52245 system_pods.go:59] 6 kube-system pods found
	I0401 20:50:55.900351   52245 system_pods.go:61] "coredns-668d6bf9bc-gzdq9" [f6050659-c76a-4a4d-8993-cd155122c2ca] Running
	I0401 20:50:55.900368   52245 system_pods.go:61] "etcd-pause-854311" [cd2cb077-bc5e-439e-b9e5-6b30256b863c] Running
	I0401 20:50:55.900372   52245 system_pods.go:61] "kube-apiserver-pause-854311" [84f99deb-1137-4aa0-9487-60e6a24c0855] Running
	I0401 20:50:55.900375   52245 system_pods.go:61] "kube-controller-manager-pause-854311" [0ebd174b-9608-4ee4-86f7-9239b3086751] Running
	I0401 20:50:55.900378   52245 system_pods.go:61] "kube-proxy-9tqpq" [5ed694bf-68e5-4bc0-9fbe-8df6e74dc624] Running
	I0401 20:50:55.900380   52245 system_pods.go:61] "kube-scheduler-pause-854311" [e53549c5-3e7b-499d-ba8c-731cca4d0ba3] Running
	I0401 20:50:55.900386   52245 system_pods.go:74] duration metric: took 176.281978ms to wait for pod list to return data ...
	I0401 20:50:55.900394   52245 default_sa.go:34] waiting for default service account to be created ...
	I0401 20:50:56.099504   52245 default_sa.go:45] found service account: "default"
	I0401 20:50:56.099544   52245 default_sa.go:55] duration metric: took 199.143021ms for default service account to be created ...
	I0401 20:50:56.099556   52245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 20:50:56.300920   52245 system_pods.go:86] 6 kube-system pods found
	I0401 20:50:56.300964   52245 system_pods.go:89] "coredns-668d6bf9bc-gzdq9" [f6050659-c76a-4a4d-8993-cd155122c2ca] Running
	I0401 20:50:56.300974   52245 system_pods.go:89] "etcd-pause-854311" [cd2cb077-bc5e-439e-b9e5-6b30256b863c] Running
	I0401 20:50:56.300982   52245 system_pods.go:89] "kube-apiserver-pause-854311" [84f99deb-1137-4aa0-9487-60e6a24c0855] Running
	I0401 20:50:56.300989   52245 system_pods.go:89] "kube-controller-manager-pause-854311" [0ebd174b-9608-4ee4-86f7-9239b3086751] Running
	I0401 20:50:56.300995   52245 system_pods.go:89] "kube-proxy-9tqpq" [5ed694bf-68e5-4bc0-9fbe-8df6e74dc624] Running
	I0401 20:50:56.301002   52245 system_pods.go:89] "kube-scheduler-pause-854311" [e53549c5-3e7b-499d-ba8c-731cca4d0ba3] Running
	I0401 20:50:56.301016   52245 system_pods.go:126] duration metric: took 201.452979ms to wait for k8s-apps to be running ...
	I0401 20:50:56.301031   52245 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 20:50:56.301086   52245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:50:56.317527   52245 system_svc.go:56] duration metric: took 16.48563ms WaitForService to wait for kubelet
	I0401 20:50:56.317577   52245 kubeadm.go:582] duration metric: took 3.379814128s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:50:56.317599   52245 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:50:56.499909   52245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 20:50:56.499932   52245 node_conditions.go:123] node cpu capacity is 2
	I0401 20:50:56.499946   52245 node_conditions.go:105] duration metric: took 182.340888ms to run NodePressure ...
	I0401 20:50:56.499960   52245 start.go:241] waiting for startup goroutines ...
	I0401 20:50:56.499969   52245 start.go:246] waiting for cluster config update ...
	I0401 20:50:56.499980   52245 start.go:255] writing updated cluster config ...
	I0401 20:50:56.500267   52245 ssh_runner.go:195] Run: rm -f paused
	I0401 20:50:56.552086   52245 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 20:50:56.555291   52245 out.go:177] * Done! kubectl is now configured to use "pause-854311" cluster and "default" namespace by default
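
	The closing "kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)" line compares the local kubectl minor version against the cluster's control plane version. A small sketch of that comparison; the parsing is simplified and assumes plain "major.minor.patch" strings, which is an assumption rather than minikube's exact logic.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components of
	// two "major.minor.patch" version strings, e.g. minorSkew("1.32.3", "1.32.2") == 0.
	func minorSkew(kubectl, cluster string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		a, err := minor(kubectl)
		if err != nil {
			return 0, err
		}
		b, err := minor(cluster)
		if err != nil {
			return 0, err
		}
		if a > b {
			return a - b, nil
		}
		return b - a, nil
	}

	func main() {
		skew, _ := minorSkew("1.32.3", "1.32.2")
		fmt.Println("minor skew:", skew) // 0, so no version-skew warning is printed
	}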
	
	
	==> CRI-O <==
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.211614567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540657211589260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79e027f0-4699-4b90-9392-57dc84b549c0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.212215275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c172876-d4d9-4291-b3f2-67997903ca67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.212320934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c172876-d4d9-4291-b3f2-67997903ca67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.212606680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c172876-d4d9-4291-b3f2-67997903ca67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.258648553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7c2aa54-3da3-4d5e-975c-e08ebf004c4f name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.258768044Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7c2aa54-3da3-4d5e-975c-e08ebf004c4f name=/runtime.v1.RuntimeService/Version
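
	The CRI-O entries in this section are CRI gRPC requests (Version, ImageFsInfo, ListContainers) arriving over the runtime socket, as issued by the kubelet and by log collection. A hedged sketch of making the same Version call from Go; the socket path, the k8s.io/cri-api import, and the use of a recent grpc-go (grpc.NewClient) are assumptions, not taken from the report.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path (the crictl default).
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		// Mirrors the VersionResponse fields shown in the log (cri-o 1.29.1, API v1).
		fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}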
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.262918519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b7a06ba-378f-4777-9152-6499325fc211 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.263580212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540657263549375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b7a06ba-378f-4777-9152-6499325fc211 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.264753402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b56d104-f8de-4a97-8743-1c5ed352ca57 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.264919363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b56d104-f8de-4a97-8743-1c5ed352ca57 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.265486238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b56d104-f8de-4a97-8743-1c5ed352ca57 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.309797789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed044303-7fd3-4918-87ad-e1b220b1524c name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.309873028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed044303-7fd3-4918-87ad-e1b220b1524c name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.311584725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fb3288b-30d9-41ad-961d-7e72621be126 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.311947763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540657311925225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fb3288b-30d9-41ad-961d-7e72621be126 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.312527496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1381a738-40c8-4842-98f5-7abaf052c2d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.312594049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1381a738-40c8-4842-98f5-7abaf052c2d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.312826615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1381a738-40c8-4842-98f5-7abaf052c2d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.364006812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=445531e3-1f49-4fda-9171-b529d5462e55 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.364083883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=445531e3-1f49-4fda-9171-b529d5462e55 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.365742862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4485338-9349-48f2-8658-781d347ee7cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.366220870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540657366194941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4485338-9349-48f2-8658-781d347ee7cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.367863892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f605a31a-ff05-4858-b74b-7360c7b38da8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.367921303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f605a31a-ff05-4858-b74b-7360c7b38da8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:57 pause-854311 crio[2364]: time="2025-04-01 20:50:57.368226671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f605a31a-ff05-4858-b74b-7360c7b38da8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6565e7a5d03e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago      Running             coredns                   1                   8daea8b16c3a4       coredns-668d6bf9bc-gzdq9
	55d6822c5af0a       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   17 seconds ago      Running             kube-proxy                1                   4953e752dbeeb       kube-proxy-9tqpq
	816d76695fcb8       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   22 seconds ago      Running             kube-scheduler            2                   87a792d3b6a4a       kube-scheduler-pause-854311
	8bde402d68f35       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   22 seconds ago      Running             etcd                      1                   d68e06b3bbf0b       etcd-pause-854311
	207d3beaf5233       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   22 seconds ago      Running             kube-apiserver            1                   dd316d1c22d11       kube-apiserver-pause-854311
	7297d75181efd       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   22 seconds ago      Running             kube-controller-manager   1                   a5150c2dac57a       kube-controller-manager-pause-854311
	8692daa04dec4       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   27 seconds ago      Exited              kube-scheduler            1                   2fb90e16f6a2e       kube-scheduler-pause-854311
	47d974cf94529       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   43 seconds ago      Exited              coredns                   0                   aa75d439623f8       coredns-668d6bf9bc-gzdq9
	980d18164dbf2       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   43 seconds ago      Exited              kube-proxy                0                   3a7633f26f654       kube-proxy-9tqpq
	a6b36114da1c5       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   54 seconds ago      Exited              kube-controller-manager   0                   4e315b7b9873c       kube-controller-manager-pause-854311
	91afd2984825c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   54 seconds ago      Exited              etcd                      0                   1065b5bcae2d9       etcd-pause-854311
	0d57ab471258d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   54 seconds ago      Exited              kube-apiserver            0                   abd463b97fb57       kube-apiserver-pause-854311
	
	
	==> coredns [47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57772 - 24853 "HINFO IN 3240051374241671829.1070327449255104046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032883881s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49140 - 28287 "HINFO IN 7333211155852405535.2950304355677977995. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033649413s
	
	
	==> describe nodes <==
	Name:               pause-854311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-854311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=pause-854311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_50_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-854311
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:50:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.73
	  Hostname:    pause-854311
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 21aa8da5f9d84ade9ab07ff7e5125a73
	  System UUID:                21aa8da5-f9d8-4ade-9ab0-7ff7e5125a73
	  Boot ID:                    9c0a3895-9dba-4ad5-be55-ef1496052e35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-gzdq9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-pause-854311                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         48s
	  kube-system                 kube-apiserver-pause-854311             250m (12%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-pause-854311    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-9tqpq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-pause-854311             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node pause-854311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node pause-854311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x7 over 55s)  kubelet          Node pause-854311 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s                kubelet          Node pause-854311 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    48s                kubelet          Node pause-854311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s                kubelet          Node pause-854311 status is now: NodeHasSufficientPID
	  Normal  NodeReady                48s                kubelet          Node pause-854311 status is now: NodeReady
	  Normal  RegisteredNode           45s                node-controller  Node pause-854311 event: Registered Node pause-854311 in Controller
	  Normal  CIDRAssignmentFailed     45s                cidrAllocator    Node pause-854311 status is now: CIDRAssignmentFailed
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-854311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-854311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-854311 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-854311 event: Registered Node pause-854311 in Controller
	
	
	==> dmesg <==
	[  +0.059188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074324] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.196543] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.148810] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.314116] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +4.870459] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +0.059076] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 1 20:50] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.528004] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.061194] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.087395] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.340637] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.858105] kauditd_printk_skb: 43 callbacks suppressed
	[ +14.377700] systemd-fstab-generator[2072]: Ignoring "noauto" option for root device
	[  +0.082132] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.098011] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[  +0.198814] systemd-fstab-generator[2098]: Ignoring "noauto" option for root device
	[  +0.167847] systemd-fstab-generator[2110]: Ignoring "noauto" option for root device
	[  +0.487952] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +2.535713] systemd-fstab-generator[2471]: Ignoring "noauto" option for root device
	[  +2.463660] systemd-fstab-generator[2594]: Ignoring "noauto" option for root device
	[  +0.086794] kauditd_printk_skb: 153 callbacks suppressed
	[  +5.516789] kauditd_printk_skb: 54 callbacks suppressed
	[  +8.255212] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.815774] systemd-fstab-generator[3309]: Ignoring "noauto" option for root device
	
	
	==> etcd [8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d] <==
	{"level":"info","ts":"2025-04-01T20:50:35.835669Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b8c5aec97845a467","local-member-id":"18962caea3d343e5","added-peer-id":"18962caea3d343e5","added-peer-peer-urls":["https://192.168.83.73:2380"]}
	{"level":"info","ts":"2025-04-01T20:50:35.835752Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b8c5aec97845a467","local-member-id":"18962caea3d343e5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:35.835795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:35.841390Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:35.845182Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:50:35.845475Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"18962caea3d343e5","initial-advertise-peer-urls":["https://192.168.83.73:2380"],"listen-peer-urls":["https://192.168.83.73:2380"],"advertise-client-urls":["https://192.168.83.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:50:35.845521Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:50:35.845599Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:35.845622Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:37.115431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:37.115604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:37.115675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 received MsgPreVoteResp from 18962caea3d343e5 at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:37.115731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.115760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 received MsgVoteResp from 18962caea3d343e5 at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.115790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.115818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18962caea3d343e5 elected leader 18962caea3d343e5 at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.120868Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"18962caea3d343e5","local-member-attributes":"{Name:pause-854311 ClientURLs:[https://192.168.83.73:2379]}","request-path":"/0/members/18962caea3d343e5/attributes","cluster-id":"b8c5aec97845a467","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:50:37.121365Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:37.122757Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:37.123855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.73:2379"}
	{"level":"info","ts":"2025-04-01T20:50:37.124742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:37.125622Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:37.127230Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:50:37.125744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:50:37.137066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7] <==
	{"level":"info","ts":"2025-04-01T20:50:03.928178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:03.928185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18962caea3d343e5 elected leader 18962caea3d343e5 at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:03.932210Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.934397Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"18962caea3d343e5","local-member-attributes":"{Name:pause-854311 ClientURLs:[https://192.168.83.73:2379]}","request-path":"/0/members/18962caea3d343e5/attributes","cluster-id":"b8c5aec97845a467","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:50:03.934449Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:03.934818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:03.935138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b8c5aec97845a467","local-member-id":"18962caea3d343e5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.935254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.935310Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.935751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:03.940545Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:50:03.945342Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:03.950215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.73:2379"}
	{"level":"info","ts":"2025-04-01T20:50:03.967050Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:50:03.975011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:50:21.681665Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-01T20:50:21.681733Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-854311","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.73:2380"],"advertise-client-urls":["https://192.168.83.73:2379"]}
	{"level":"warn","ts":"2025-04-01T20:50:21.681857Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:50:21.681941Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:50:21.711621Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.73:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:50:21.711799Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.73:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-01T20:50:21.712140Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"18962caea3d343e5","current-leader-member-id":"18962caea3d343e5"}
	{"level":"info","ts":"2025-04-01T20:50:21.716528Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:21.716754Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:21.716856Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-854311","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.73:2380"],"advertise-client-urls":["https://192.168.83.73:2379"]}
	
	
	==> kernel <==
	 20:50:57 up 1 min,  0 users,  load average: 1.33, 0.39, 0.13
	Linux pause-854311 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315] <==
	I0401 20:50:06.958241       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:50:06.967174       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:50:06.967213       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:50:07.795520       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:50:07.884853       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:50:08.019659       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:50:08.027468       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.73]
	I0401 20:50:08.028721       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:50:08.033863       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:50:08.046773       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:50:08.967556       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:50:08.982185       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:50:09.000233       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:50:13.451202       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:50:13.504050       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:50:21.677751       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0401 20:50:21.691679       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.691789       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.691839       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.691925       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.692658       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.692904       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.695451       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.696063       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.697622       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c] <==
	E0401 20:50:38.754119       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 20:50:38.765080       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0401 20:50:38.788917       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0401 20:50:38.789776       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0401 20:50:38.789875       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0401 20:50:38.790227       1 shared_informer.go:320] Caches are synced for configmaps
	I0401 20:50:38.794056       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:50:38.794125       1 policy_source.go:240] refreshing policies
	I0401 20:50:38.795068       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 20:50:38.807394       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0401 20:50:38.808406       1 aggregator.go:171] initial CRD sync complete...
	I0401 20:50:38.808456       1 autoregister_controller.go:144] Starting autoregister controller
	I0401 20:50:38.808465       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:50:38.808470       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:50:38.816787       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0401 20:50:38.848115       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:50:39.604656       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:50:39.747152       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:50:40.643606       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:50:40.730654       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:50:40.806655       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:50:40.838810       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:50:42.136036       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:50:42.284591       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:50:48.190504       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8] <==
	I0401 20:50:41.982269       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0401 20:50:41.982041       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0401 20:50:41.982056       1 shared_informer.go:320] Caches are synced for disruption
	I0401 20:50:41.982063       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0401 20:50:41.984504       1 shared_informer.go:320] Caches are synced for deployment
	I0401 20:50:41.984619       1 shared_informer.go:320] Caches are synced for node
	I0401 20:50:41.984751       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:50:41.984938       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:50:41.985024       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:50:41.985046       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:50:41.985223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:41.990069       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0401 20:50:41.990303       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:50:42.000317       1 shared_informer.go:320] Caches are synced for HPA
	I0401 20:50:42.003741       1 shared_informer.go:320] Caches are synced for expand
	I0401 20:50:42.008212       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0401 20:50:42.017528       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:50:42.027050       1 shared_informer.go:320] Caches are synced for namespace
	I0401 20:50:42.029585       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:50:42.030742       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:50:42.033264       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:50:42.040660       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0401 20:50:42.043129       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:50:48.199581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="37.118801ms"
	I0401 20:50:48.202830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="244.05µs"
	
	
	==> kube-controller-manager [a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d] <==
	I0401 20:50:12.647400       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:50:12.647692       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0401 20:50:12.648900       1 shared_informer.go:320] Caches are synced for GC
	I0401 20:50:12.649236       1 shared_informer.go:320] Caches are synced for stateful set
	I0401 20:50:12.649285       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0401 20:50:12.650440       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:50:12.657099       1 shared_informer.go:320] Caches are synced for crt configmap
	E0401 20:50:12.659563       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"pause-854311\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-854311" podCIDRs=["10.244.1.0/24"]
	E0401 20:50:12.659786       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"pause-854311\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-854311"
	E0401 20:50:12.661664       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'pause-854311': failed to patch node CIDR: Node \"pause-854311\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0401 20:50:12.661813       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:12.663084       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:50:12.665058       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:50:12.667453       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:12.907020       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:13.769084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="303.592273ms"
	I0401 20:50:13.796863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="27.607736ms"
	I0401 20:50:13.817351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.416003ms"
	I0401 20:50:13.818054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="199.998µs"
	I0401 20:50:13.833458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="78.599µs"
	I0401 20:50:15.141297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="96.871µs"
	I0401 20:50:15.177536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="77.007µs"
	I0401 20:50:15.188681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="44.744µs"
	I0401 20:50:15.194628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="75.896µs"
	I0401 20:50:19.357897       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	
	
	==> kube-proxy [55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 20:50:40.430338       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 20:50:40.454242       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.73"]
	E0401 20:50:40.454433       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:50:40.537267       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 20:50:40.537371       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 20:50:40.537406       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:50:40.540453       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:50:40.541285       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:50:40.541318       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:50:40.544479       1 config.go:199] "Starting service config controller"
	I0401 20:50:40.544530       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:50:40.544570       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:50:40.544593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:50:40.548702       1 config.go:329] "Starting node config controller"
	I0401 20:50:40.548735       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:50:40.644749       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:50:40.644832       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:50:40.654800       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 20:50:14.362464       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 20:50:14.407094       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.73"]
	E0401 20:50:14.407576       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:50:14.514717       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 20:50:14.514767       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 20:50:14.514912       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:50:14.523336       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:50:14.524559       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:50:14.524867       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:50:14.528566       1 config.go:199] "Starting service config controller"
	I0401 20:50:14.528864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:50:14.529069       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:50:14.529099       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:50:14.534297       1 config.go:329] "Starting node config controller"
	I0401 20:50:14.534403       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:50:14.630010       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:50:14.630056       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:50:14.634926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78] <==
	I0401 20:50:36.115386       1 serving.go:386] Generated self-signed cert in-memory
	I0401 20:50:38.774192       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:50:38.774349       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:50:38.782728       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0401 20:50:38.783367       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0401 20:50:38.783579       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:50:38.783610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:50:38.783733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0401 20:50:38.783826       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0401 20:50:38.785802       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:50:38.786751       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:50:38.884188       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0401 20:50:38.884417       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0401 20:50:38.884447       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c] <==
	
	
	==> kubelet <==
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.773078    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.850371    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.866247    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-854311\" already exists" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.893701    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-854311\" already exists" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.893890    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.914949    2601 kubelet_node_status.go:125] "Node was previously registered" node="pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.915332    2601 kubelet_node_status.go:79] "Successfully registered node" node="pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.915464    2601 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.917139    2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.927780    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-854311\" already exists" pod="kube-system/kube-controller-manager-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.927849    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.943937    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-854311\" already exists" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.944185    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.956169    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-854311\" already exists" pod="kube-system/etcd-pause-854311"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.195054    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: E0401 20:50:39.204305    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-854311\" already exists" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.578944    2601 apiserver.go:52] "Watching apiserver"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.673072    2601 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.742343    2601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed694bf-68e5-4bc0-9fbe-8df6e74dc624-lib-modules\") pod \"kube-proxy-9tqpq\" (UID: \"5ed694bf-68e5-4bc0-9fbe-8df6e74dc624\") " pod="kube-system/kube-proxy-9tqpq"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.742420    2601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed694bf-68e5-4bc0-9fbe-8df6e74dc624-xtables-lock\") pod \"kube-proxy-9tqpq\" (UID: \"5ed694bf-68e5-4bc0-9fbe-8df6e74dc624\") " pod="kube-system/kube-proxy-9tqpq"
	Apr 01 20:50:44 pause-854311 kubelet[2601]: E0401 20:50:44.745111    2601 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540644744684259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:50:44 pause-854311 kubelet[2601]: E0401 20:50:44.745142    2601 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540644744684259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:50:48 pause-854311 kubelet[2601]: I0401 20:50:48.140162    2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 01 20:50:54 pause-854311 kubelet[2601]: E0401 20:50:54.751238    2601 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540654750746610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:50:54 pause-854311 kubelet[2601]: E0401 20:50:54.751273    2601 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540654750746610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-854311 -n pause-854311
helpers_test.go:261: (dbg) Run:  kubectl --context pause-854311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-854311 -n pause-854311
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-854311 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-854311 logs -n 25: (1.620206814s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC | 01 Apr 25 20:46 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:46 UTC | 01 Apr 25 20:47 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-697388       | scheduled-stop-697388     | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:47 UTC |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-838550         | offline-crio-838550       | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:49 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-881088   | kubernetes-upgrade-881088 | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:49 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-877059      | minikube                  | jenkins | v1.26.0 | 01 Apr 25 20:47 UTC | 01 Apr 25 20:49 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:49 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-838550         | offline-crio-838550       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:49 UTC |
	| start   | -p pause-854311 --memory=2048  | pause-854311              | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:50 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:49 UTC |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC | 01 Apr 25 20:50 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-877059      | running-upgrade-877059    | jenkins | v1.35.0 | 01 Apr 25 20:49 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-850365 sudo    | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| start   | -p pause-854311                | pause-854311              | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	| start   | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-850365 sudo    | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-850365         | NoKubernetes-850365       | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC | 01 Apr 25 20:50 UTC |
	| start   | -p force-systemd-env-818542    | force-systemd-env-818542  | jenkins | v1.35.0 | 01 Apr 25 20:50 UTC |                     |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:50:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:50:51.001456   52720 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:50:51.001733   52720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:50:51.001745   52720 out.go:358] Setting ErrFile to fd 2...
	I0401 20:50:51.001749   52720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:50:51.001949   52720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:50:51.002551   52720 out.go:352] Setting JSON to false
	I0401 20:50:51.003601   52720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5595,"bootTime":1743535056,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:50:51.003663   52720 start.go:139] virtualization: kvm guest
	I0401 20:50:51.005804   52720 out.go:177] * [force-systemd-env-818542] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:50:51.007098   52720 notify.go:220] Checking for updates...
	I0401 20:50:51.007117   52720 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:50:51.008419   52720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:50:51.009904   52720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:50:51.011195   52720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:51.012783   52720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:50:51.014373   52720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0401 20:50:51.015984   52720 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:50:51.016099   52720 config.go:182] Loaded profile config "pause-854311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:50:51.016196   52720 config.go:182] Loaded profile config "running-upgrade-877059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0401 20:50:51.016298   52720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:50:51.054200   52720 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 20:50:51.055747   52720 start.go:297] selected driver: kvm2
	I0401 20:50:51.055768   52720 start.go:901] validating driver "kvm2" against <nil>
	I0401 20:50:51.055790   52720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:50:51.056541   52720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:50:51.056630   52720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:50:51.073600   52720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:50:51.073649   52720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:50:51.074007   52720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 20:50:51.074050   52720 cni.go:84] Creating CNI manager for ""
	I0401 20:50:51.074117   52720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:50:51.074132   52720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 20:50:51.074191   52720 start.go:340] cluster config:
	{Name:force-systemd-env-818542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-systemd-env-818542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:50:51.074340   52720 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:50:51.076247   52720 out.go:177] * Starting "force-systemd-env-818542" primary control-plane node in "force-systemd-env-818542" cluster
	I0401 20:50:51.077604   52720 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:50:51.077642   52720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:50:51.077658   52720 cache.go:56] Caching tarball of preloaded images
	I0401 20:50:51.077734   52720 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:50:51.077747   52720 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:50:51.077829   52720 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/force-systemd-env-818542/config.json ...
	I0401 20:50:51.077845   52720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/force-systemd-env-818542/config.json: {Name:mkd7a89da1b6548c562f66657759c49af660e660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:51.077964   52720 start.go:360] acquireMachinesLock for force-systemd-env-818542: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 20:50:51.077992   52720 start.go:364] duration metric: took 15.282µs to acquireMachinesLock for "force-systemd-env-818542"
	I0401 20:50:51.078005   52720 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-818542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-systemd-env-818542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:50:51.078045   52720 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 20:50:52.903864   52245 pod_ready.go:93] pod "etcd-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.903889   52245 pod_ready.go:82] duration metric: took 4.508063117s for pod "etcd-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.903900   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.909036   52245 pod_ready.go:93] pod "kube-apiserver-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.909061   52245 pod_ready.go:82] duration metric: took 5.152939ms for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.909070   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.914050   52245 pod_ready.go:93] pod "kube-controller-manager-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.914072   52245 pod_ready.go:82] duration metric: took 4.995179ms for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.914084   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.917926   52245 pod_ready.go:93] pod "kube-proxy-9tqpq" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.917953   52245 pod_ready.go:82] duration metric: took 3.860807ms for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.917965   52245 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.922331   52245 pod_ready.go:93] pod "kube-scheduler-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:52.922353   52245 pod_ready.go:82] duration metric: took 4.381239ms for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:52.922364   52245 pod_ready.go:39] duration metric: took 12.040518015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:50:52.922384   52245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:50:52.936535   52245 ops.go:34] apiserver oom_adj: -16
	I0401 20:50:52.936559   52245 kubeadm.go:597] duration metric: took 20.299547463s to restartPrimaryControlPlane
	I0401 20:50:52.936568   52245 kubeadm.go:394] duration metric: took 20.412829628s to StartCluster
	I0401 20:50:52.936588   52245 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:52.936681   52245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:50:52.937480   52245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:50:52.937736   52245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.73 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:50:52.937841   52245 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:50:52.937984   52245 config.go:182] Loaded profile config "pause-854311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:50:52.939304   52245 out.go:177] * Verifying Kubernetes components...
	I0401 20:50:52.939317   52245 out.go:177] * Enabled addons: 
	I0401 20:50:51.862744   49910 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:50:51.862964   49910 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:50:50.351417   51684 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:50:50.351445   51684 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:50:50.351458   51684 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0401 20:50:52.357715   51684 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:50:52.357742   51684 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:50:52.357761   51684 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0401 20:50:54.364682   51684 api_server.go:279] https://192.168.72.171:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 20:50:54.364720   51684 api_server.go:103] status: https://192.168.72.171:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 20:50:54.364738   51684 api_server.go:253] Checking apiserver healthz at https://192.168.72.171:8443/healthz ...
	I0401 20:50:52.940499   52245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:50:52.940502   52245 addons.go:514] duration metric: took 2.671334ms for enable addons: enabled=[]
	I0401 20:50:53.110111   52245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:50:53.127757   52245 node_ready.go:35] waiting up to 6m0s for node "pause-854311" to be "Ready" ...
	I0401 20:50:53.130661   52245 node_ready.go:49] node "pause-854311" has status "Ready":"True"
	I0401 20:50:53.130681   52245 node_ready.go:38] duration metric: took 2.893977ms for node "pause-854311" to be "Ready" ...
	I0401 20:50:53.130689   52245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:50:53.300826   52245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gzdq9" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:53.700391   52245 pod_ready.go:93] pod "coredns-668d6bf9bc-gzdq9" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:53.700424   52245 pod_ready.go:82] duration metric: took 399.571945ms for pod "coredns-668d6bf9bc-gzdq9" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:53.700440   52245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.099987   52245 pod_ready.go:93] pod "etcd-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:54.100016   52245 pod_ready.go:82] duration metric: took 399.567597ms for pod "etcd-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.100025   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.501079   52245 pod_ready.go:93] pod "kube-apiserver-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:54.501101   52245 pod_ready.go:82] duration metric: took 401.069982ms for pod "kube-apiserver-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.501111   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.900845   52245 pod_ready.go:93] pod "kube-controller-manager-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:54.900873   52245 pod_ready.go:82] duration metric: took 399.753846ms for pod "kube-controller-manager-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:54.900887   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:55.300539   52245 pod_ready.go:93] pod "kube-proxy-9tqpq" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:55.300560   52245 pod_ready.go:82] duration metric: took 399.66551ms for pod "kube-proxy-9tqpq" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:55.300569   52245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:51.079765   52720 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0401 20:50:51.079899   52720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:50:51.079939   52720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:50:51.094608   52720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0401 20:50:51.095079   52720 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:50:51.095609   52720 main.go:141] libmachine: Using API Version  1
	I0401 20:50:51.095628   52720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:50:51.096017   52720 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:50:51.096241   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .GetMachineName
	I0401 20:50:51.096383   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .DriverName
	I0401 20:50:51.096514   52720 start.go:159] libmachine.API.Create for "force-systemd-env-818542" (driver="kvm2")
	I0401 20:50:51.096541   52720 client.go:168] LocalClient.Create starting
	I0401 20:50:51.096581   52720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 20:50:51.096621   52720 main.go:141] libmachine: Decoding PEM data...
	I0401 20:50:51.096645   52720 main.go:141] libmachine: Parsing certificate...
	I0401 20:50:51.096718   52720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 20:50:51.096749   52720 main.go:141] libmachine: Decoding PEM data...
	I0401 20:50:51.096770   52720 main.go:141] libmachine: Parsing certificate...
	I0401 20:50:51.096790   52720 main.go:141] libmachine: Running pre-create checks...
	I0401 20:50:51.096806   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .PreCreateCheck
	I0401 20:50:51.097097   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .GetConfigRaw
	I0401 20:50:51.097547   52720 main.go:141] libmachine: Creating machine...
	I0401 20:50:51.097574   52720 main.go:141] libmachine: (force-systemd-env-818542) Calling .Create
	I0401 20:50:51.097702   52720 main.go:141] libmachine: (force-systemd-env-818542) creating KVM machine...
	I0401 20:50:51.097721   52720 main.go:141] libmachine: (force-systemd-env-818542) creating network...
	I0401 20:50:51.098982   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | found existing default KVM network
	I0401 20:50:51.100048   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.099894   52759 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:c4:50} reservation:<nil>}
	I0401 20:50:51.101271   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.101173   52759 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201a20}
	I0401 20:50:51.101289   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | created network xml: 
	I0401 20:50:51.101302   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | <network>
	I0401 20:50:51.101315   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   <name>mk-force-systemd-env-818542</name>
	I0401 20:50:51.101329   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   <dns enable='no'/>
	I0401 20:50:51.101340   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   
	I0401 20:50:51.101355   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0401 20:50:51.101369   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |     <dhcp>
	I0401 20:50:51.101396   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0401 20:50:51.101420   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |     </dhcp>
	I0401 20:50:51.101435   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   </ip>
	I0401 20:50:51.101446   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG |   
	I0401 20:50:51.101472   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | </network>
	I0401 20:50:51.101482   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | 
	I0401 20:50:51.106960   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | trying to create private KVM network mk-force-systemd-env-818542 192.168.50.0/24...
	I0401 20:50:51.177974   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | private KVM network mk-force-systemd-env-818542 192.168.50.0/24 created
	I0401 20:50:51.178014   52720 main.go:141] libmachine: (force-systemd-env-818542) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542 ...
	I0401 20:50:51.178031   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.177956   52759 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:51.178048   52720 main.go:141] libmachine: (force-systemd-env-818542) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 20:50:51.178076   52720 main.go:141] libmachine: (force-systemd-env-818542) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 20:50:51.415934   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.415772   52759 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/id_rsa...
	I0401 20:50:51.746673   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.746505   52759 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/force-systemd-env-818542.rawdisk...
	I0401 20:50:51.746708   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | Writing magic tar header
	I0401 20:50:51.746769   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | Writing SSH key tar header
	I0401 20:50:51.746799   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542 (perms=drwx------)
	I0401 20:50:51.746815   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:51.746649   52759 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542 ...
	I0401 20:50:51.746847   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 20:50:51.746869   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542
	I0401 20:50:51.746883   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 20:50:51.746899   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 20:50:51.746916   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 20:50:51.746926   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 20:50:51.746938   52720 main.go:141] libmachine: (force-systemd-env-818542) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 20:50:51.746949   52720 main.go:141] libmachine: (force-systemd-env-818542) creating domain...
	I0401 20:50:51.746968   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:50:51.746980   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 20:50:51.747002   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 20:50:51.747024   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home/jenkins
	I0401 20:50:51.747047   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | checking permissions on dir: /home
	I0401 20:50:51.747059   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | skipping /home - not owner
	I0401 20:50:51.748112   52720 main.go:141] libmachine: (force-systemd-env-818542) define libvirt domain using xml: 
	I0401 20:50:51.748129   52720 main.go:141] libmachine: (force-systemd-env-818542) <domain type='kvm'>
	I0401 20:50:51.748139   52720 main.go:141] libmachine: (force-systemd-env-818542)   <name>force-systemd-env-818542</name>
	I0401 20:50:51.748146   52720 main.go:141] libmachine: (force-systemd-env-818542)   <memory unit='MiB'>2048</memory>
	I0401 20:50:51.748154   52720 main.go:141] libmachine: (force-systemd-env-818542)   <vcpu>2</vcpu>
	I0401 20:50:51.748172   52720 main.go:141] libmachine: (force-systemd-env-818542)   <features>
	I0401 20:50:51.748184   52720 main.go:141] libmachine: (force-systemd-env-818542)     <acpi/>
	I0401 20:50:51.748196   52720 main.go:141] libmachine: (force-systemd-env-818542)     <apic/>
	I0401 20:50:51.748204   52720 main.go:141] libmachine: (force-systemd-env-818542)     <pae/>
	I0401 20:50:51.748213   52720 main.go:141] libmachine: (force-systemd-env-818542)     
	I0401 20:50:51.748218   52720 main.go:141] libmachine: (force-systemd-env-818542)   </features>
	I0401 20:50:51.748230   52720 main.go:141] libmachine: (force-systemd-env-818542)   <cpu mode='host-passthrough'>
	I0401 20:50:51.748238   52720 main.go:141] libmachine: (force-systemd-env-818542)   
	I0401 20:50:51.748242   52720 main.go:141] libmachine: (force-systemd-env-818542)   </cpu>
	I0401 20:50:51.748249   52720 main.go:141] libmachine: (force-systemd-env-818542)   <os>
	I0401 20:50:51.748256   52720 main.go:141] libmachine: (force-systemd-env-818542)     <type>hvm</type>
	I0401 20:50:51.748275   52720 main.go:141] libmachine: (force-systemd-env-818542)     <boot dev='cdrom'/>
	I0401 20:50:51.748286   52720 main.go:141] libmachine: (force-systemd-env-818542)     <boot dev='hd'/>
	I0401 20:50:51.748311   52720 main.go:141] libmachine: (force-systemd-env-818542)     <bootmenu enable='no'/>
	I0401 20:50:51.748329   52720 main.go:141] libmachine: (force-systemd-env-818542)   </os>
	I0401 20:50:51.748340   52720 main.go:141] libmachine: (force-systemd-env-818542)   <devices>
	I0401 20:50:51.748352   52720 main.go:141] libmachine: (force-systemd-env-818542)     <disk type='file' device='cdrom'>
	I0401 20:50:51.748371   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/boot2docker.iso'/>
	I0401 20:50:51.748387   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target dev='hdc' bus='scsi'/>
	I0401 20:50:51.748400   52720 main.go:141] libmachine: (force-systemd-env-818542)       <readonly/>
	I0401 20:50:51.748411   52720 main.go:141] libmachine: (force-systemd-env-818542)     </disk>
	I0401 20:50:51.748444   52720 main.go:141] libmachine: (force-systemd-env-818542)     <disk type='file' device='disk'>
	I0401 20:50:51.748461   52720 main.go:141] libmachine: (force-systemd-env-818542)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 20:50:51.748476   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/force-systemd-env-818542/force-systemd-env-818542.rawdisk'/>
	I0401 20:50:51.748487   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target dev='hda' bus='virtio'/>
	I0401 20:50:51.748497   52720 main.go:141] libmachine: (force-systemd-env-818542)     </disk>
	I0401 20:50:51.748507   52720 main.go:141] libmachine: (force-systemd-env-818542)     <interface type='network'>
	I0401 20:50:51.748518   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source network='mk-force-systemd-env-818542'/>
	I0401 20:50:51.748534   52720 main.go:141] libmachine: (force-systemd-env-818542)       <model type='virtio'/>
	I0401 20:50:51.748546   52720 main.go:141] libmachine: (force-systemd-env-818542)     </interface>
	I0401 20:50:51.748557   52720 main.go:141] libmachine: (force-systemd-env-818542)     <interface type='network'>
	I0401 20:50:51.748570   52720 main.go:141] libmachine: (force-systemd-env-818542)       <source network='default'/>
	I0401 20:50:51.748582   52720 main.go:141] libmachine: (force-systemd-env-818542)       <model type='virtio'/>
	I0401 20:50:51.748605   52720 main.go:141] libmachine: (force-systemd-env-818542)     </interface>
	I0401 20:50:51.748621   52720 main.go:141] libmachine: (force-systemd-env-818542)     <serial type='pty'>
	I0401 20:50:51.748633   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target port='0'/>
	I0401 20:50:51.748644   52720 main.go:141] libmachine: (force-systemd-env-818542)     </serial>
	I0401 20:50:51.748657   52720 main.go:141] libmachine: (force-systemd-env-818542)     <console type='pty'>
	I0401 20:50:51.748668   52720 main.go:141] libmachine: (force-systemd-env-818542)       <target type='serial' port='0'/>
	I0401 20:50:51.748678   52720 main.go:141] libmachine: (force-systemd-env-818542)     </console>
	I0401 20:50:51.748693   52720 main.go:141] libmachine: (force-systemd-env-818542)     <rng model='virtio'>
	I0401 20:50:51.748707   52720 main.go:141] libmachine: (force-systemd-env-818542)       <backend model='random'>/dev/random</backend>
	I0401 20:50:51.748717   52720 main.go:141] libmachine: (force-systemd-env-818542)     </rng>
	I0401 20:50:51.748727   52720 main.go:141] libmachine: (force-systemd-env-818542)     
	I0401 20:50:51.748736   52720 main.go:141] libmachine: (force-systemd-env-818542)     
	I0401 20:50:51.748745   52720 main.go:141] libmachine: (force-systemd-env-818542)   </devices>
	I0401 20:50:51.748762   52720 main.go:141] libmachine: (force-systemd-env-818542) </domain>
	I0401 20:50:51.748772   52720 main.go:141] libmachine: (force-systemd-env-818542) 
	I0401 20:50:51.752967   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:52:17:9b in network default
	I0401 20:50:51.753537   52720 main.go:141] libmachine: (force-systemd-env-818542) starting domain...
	I0401 20:50:51.753557   52720 main.go:141] libmachine: (force-systemd-env-818542) ensuring networks are active...
	I0401 20:50:51.753582   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:51.754259   52720 main.go:141] libmachine: (force-systemd-env-818542) Ensuring network default is active
	I0401 20:50:51.754611   52720 main.go:141] libmachine: (force-systemd-env-818542) Ensuring network mk-force-systemd-env-818542 is active
	I0401 20:50:51.755194   52720 main.go:141] libmachine: (force-systemd-env-818542) getting domain XML...
	I0401 20:50:51.755996   52720 main.go:141] libmachine: (force-systemd-env-818542) creating domain...
	I0401 20:50:53.004542   52720 main.go:141] libmachine: (force-systemd-env-818542) waiting for IP...
	I0401 20:50:53.005503   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:53.006090   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:53.006146   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:53.006078   52759 retry.go:31] will retry after 238.360538ms: waiting for domain to come up
	I0401 20:50:53.246711   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:53.247369   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:53.247401   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:53.247311   52759 retry.go:31] will retry after 378.94785ms: waiting for domain to come up
	I0401 20:50:53.627928   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:53.628417   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:53.628440   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:53.628378   52759 retry.go:31] will retry after 474.609475ms: waiting for domain to come up
	I0401 20:50:54.105074   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:54.105633   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:54.105661   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:54.105598   52759 retry.go:31] will retry after 402.97083ms: waiting for domain to come up
	I0401 20:50:54.510323   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:54.510817   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:54.510857   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:54.510812   52759 retry.go:31] will retry after 705.269755ms: waiting for domain to come up
	I0401 20:50:55.218477   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | domain force-systemd-env-818542 has defined MAC address 52:54:00:2f:d2:f9 in network mk-force-systemd-env-818542
	I0401 20:50:55.218964   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | unable to find current IP address of domain force-systemd-env-818542 in network mk-force-systemd-env-818542
	I0401 20:50:55.218998   52720 main.go:141] libmachine: (force-systemd-env-818542) DBG | I0401 20:50:55.218914   52759 retry.go:31] will retry after 798.06074ms: waiting for domain to come up
	I0401 20:50:55.701159   52245 pod_ready.go:93] pod "kube-scheduler-pause-854311" in "kube-system" namespace has status "Ready":"True"
	I0401 20:50:55.701184   52245 pod_ready.go:82] duration metric: took 400.609485ms for pod "kube-scheduler-pause-854311" in "kube-system" namespace to be "Ready" ...
	I0401 20:50:55.701192   52245 pod_ready.go:39] duration metric: took 2.570493716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 20:50:55.701206   52245 api_server.go:52] waiting for apiserver process to appear ...
	I0401 20:50:55.701262   52245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:50:55.717047   52245 api_server.go:72] duration metric: took 2.779277384s to wait for apiserver process to appear ...
	I0401 20:50:55.717077   52245 api_server.go:88] waiting for apiserver healthz status ...
	I0401 20:50:55.717095   52245 api_server.go:253] Checking apiserver healthz at https://192.168.83.73:8443/healthz ...
	I0401 20:50:55.722955   52245 api_server.go:279] https://192.168.83.73:8443/healthz returned 200:
	ok
	I0401 20:50:55.724073   52245 api_server.go:141] control plane version: v1.32.2
	I0401 20:50:55.724092   52245 api_server.go:131] duration metric: took 7.009068ms to wait for apiserver health ...
	I0401 20:50:55.724100   52245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:50:55.900325   52245 system_pods.go:59] 6 kube-system pods found
	I0401 20:50:55.900351   52245 system_pods.go:61] "coredns-668d6bf9bc-gzdq9" [f6050659-c76a-4a4d-8993-cd155122c2ca] Running
	I0401 20:50:55.900368   52245 system_pods.go:61] "etcd-pause-854311" [cd2cb077-bc5e-439e-b9e5-6b30256b863c] Running
	I0401 20:50:55.900372   52245 system_pods.go:61] "kube-apiserver-pause-854311" [84f99deb-1137-4aa0-9487-60e6a24c0855] Running
	I0401 20:50:55.900375   52245 system_pods.go:61] "kube-controller-manager-pause-854311" [0ebd174b-9608-4ee4-86f7-9239b3086751] Running
	I0401 20:50:55.900378   52245 system_pods.go:61] "kube-proxy-9tqpq" [5ed694bf-68e5-4bc0-9fbe-8df6e74dc624] Running
	I0401 20:50:55.900380   52245 system_pods.go:61] "kube-scheduler-pause-854311" [e53549c5-3e7b-499d-ba8c-731cca4d0ba3] Running
	I0401 20:50:55.900386   52245 system_pods.go:74] duration metric: took 176.281978ms to wait for pod list to return data ...
	I0401 20:50:55.900394   52245 default_sa.go:34] waiting for default service account to be created ...
	I0401 20:50:56.099504   52245 default_sa.go:45] found service account: "default"
	I0401 20:50:56.099544   52245 default_sa.go:55] duration metric: took 199.143021ms for default service account to be created ...
	I0401 20:50:56.099556   52245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 20:50:56.300920   52245 system_pods.go:86] 6 kube-system pods found
	I0401 20:50:56.300964   52245 system_pods.go:89] "coredns-668d6bf9bc-gzdq9" [f6050659-c76a-4a4d-8993-cd155122c2ca] Running
	I0401 20:50:56.300974   52245 system_pods.go:89] "etcd-pause-854311" [cd2cb077-bc5e-439e-b9e5-6b30256b863c] Running
	I0401 20:50:56.300982   52245 system_pods.go:89] "kube-apiserver-pause-854311" [84f99deb-1137-4aa0-9487-60e6a24c0855] Running
	I0401 20:50:56.300989   52245 system_pods.go:89] "kube-controller-manager-pause-854311" [0ebd174b-9608-4ee4-86f7-9239b3086751] Running
	I0401 20:50:56.300995   52245 system_pods.go:89] "kube-proxy-9tqpq" [5ed694bf-68e5-4bc0-9fbe-8df6e74dc624] Running
	I0401 20:50:56.301002   52245 system_pods.go:89] "kube-scheduler-pause-854311" [e53549c5-3e7b-499d-ba8c-731cca4d0ba3] Running
	I0401 20:50:56.301016   52245 system_pods.go:126] duration metric: took 201.452979ms to wait for k8s-apps to be running ...
	I0401 20:50:56.301031   52245 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 20:50:56.301086   52245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:50:56.317527   52245 system_svc.go:56] duration metric: took 16.48563ms WaitForService to wait for kubelet
	I0401 20:50:56.317577   52245 kubeadm.go:582] duration metric: took 3.379814128s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:50:56.317599   52245 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:50:56.499909   52245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 20:50:56.499932   52245 node_conditions.go:123] node cpu capacity is 2
	I0401 20:50:56.499946   52245 node_conditions.go:105] duration metric: took 182.340888ms to run NodePressure ...
	I0401 20:50:56.499960   52245 start.go:241] waiting for startup goroutines ...
	I0401 20:50:56.499969   52245 start.go:246] waiting for cluster config update ...
	I0401 20:50:56.499980   52245 start.go:255] writing updated cluster config ...
	I0401 20:50:56.500267   52245 ssh_runner.go:195] Run: rm -f paused
	I0401 20:50:56.552086   52245 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 20:50:56.555291   52245 out.go:177] * Done! kubectl is now configured to use "pause-854311" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.340709316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540659340417914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=097970e5-be37-4089-be3f-414f20f56973 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.341812626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9d39997-7ea9-4669-a5ba-985024286fb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.341886577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9d39997-7ea9-4669-a5ba-985024286fb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.342404346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9d39997-7ea9-4669-a5ba-985024286fb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.404741651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24a0235c-5355-4961-890e-60d2beb882ce name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.405029065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24a0235c-5355-4961-890e-60d2beb882ce name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.406601364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f641caaa-38ad-446b-a734-c569e082a6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.407278621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540659407231111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f641caaa-38ad-446b-a734-c569e082a6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.408211051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00b89540-6daf-4c4f-b497-4b8227169bdb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.408304841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00b89540-6daf-4c4f-b497-4b8227169bdb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.408637354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00b89540-6daf-4c4f-b497-4b8227169bdb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.469701086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e5add06-7258-4652-843e-1742f671b5a5 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.469807612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e5add06-7258-4652-843e-1742f671b5a5 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.471255150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a645a957-a02e-43dd-901c-e02b6dcb2270 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.471725423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540659471696851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a645a957-a02e-43dd-901c-e02b6dcb2270 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.472602869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5d8fdc2-a616-4447-b350-2206ec3feb8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.472690314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5d8fdc2-a616-4447-b350-2206ec3feb8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.473098779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5d8fdc2-a616-4447-b350-2206ec3feb8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.532511170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4cc89eb-035a-4fad-9bfd-ed27ea5885c6 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.532619804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4cc89eb-035a-4fad-9bfd-ed27ea5885c6 name=/runtime.v1.RuntimeService/Version
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.533689551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=090073d6-cd50-4955-b768-22d6140b626b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.534577213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540659534541376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=090073d6-cd50-4955-b768-22d6140b626b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.535247991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de8e5a09-9cab-4dba-8c98-5eb2aae1cac6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.535352118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de8e5a09-9cab-4dba-8c98-5eb2aae1cac6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 20:50:59 pause-854311 crio[2364]: time="2025-04-01 20:50:59.535735052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8,PodSandboxId:8daea8b16c3a4894e51ad5da098e92ae72c01ef548fb3c2ab82c23ecaa063857,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1743540640449486671,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b,PodSandboxId:4953e752dbeebade446e3b1c95c07721f83daca571e4cae56097a7f7d049b747,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1743540640092740512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78,PodSandboxId:87a792d3b6a4af0181637f2438045e1fd48ff18ce6ca56ac28d285365dd9587a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1743540635403816034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add
0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8,PodSandboxId:a5150c2dac57ad52d54f33d8ac234f25bfb06b6bf527b18050d11b2f36c55272,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1743540635323756080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d,PodSandboxId:d68e06b3bbf0b1d292d28403b26dc5b8fa5e105ca26bdb80de5d3b068e4d6664,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1743540635372919392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c,PodSandboxId:dd316d1c22d11e7831cbf9c5f34bda1b1d3c7a4af8d9afa872f43520ed61cd7c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1743540635344135826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.
kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c,PodSandboxId:2fb90e16f6a2e1ac9dfcd6050e97ae4d17e7a6080c0fa33215ac41a08f63a42c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1743540629610632444,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27fedc8add0898ac9257704542d56e6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 4c5aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf,PodSandboxId:aa75d439623f8876addf092d01d2d2c05e3a49a2ea76310b2c2a9b459f86a4ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1743540614414861825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-gzdq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6050659-c76a-4a4d-8993-cd155122c2ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.c
ontainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133,PodSandboxId:3a7633f26f65422a4b1c82b53f44630d0234c9f97fbcb29ab81fbb0fcf91b6f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1743540613981069431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9
tqpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ed694bf-68e5-4bc0-9fbe-8df6e74dc624,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d,PodSandboxId:4e315b7b9873c9c9e2fbc723e796b00054b6cae74cb2d108154dda721e830b90,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1743540603390718896,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manag
er-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa48a7978c865626134f5afa740e2bed,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7,PodSandboxId:1065b5bcae2d9c2ab154ab88d09e60c3ee541403701e7e120cc67ce09d30bf49,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1743540603269691113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-854311,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 975b9d955e5feb4a58342eeda484cc5a,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315,PodSandboxId:abd463b97fb57be88be3ac2887c4de1d99199ec8b513384d6582a8ed4e953211,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1743540603245700298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-854311,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ef42574803c91e1d1b0e271affc3fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de8e5a09-9cab-4dba-8c98-5eb2aae1cac6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6565e7a5d03e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   1                   8daea8b16c3a4       coredns-668d6bf9bc-gzdq9
	55d6822c5af0a       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   19 seconds ago      Running             kube-proxy                1                   4953e752dbeeb       kube-proxy-9tqpq
	816d76695fcb8       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   24 seconds ago      Running             kube-scheduler            2                   87a792d3b6a4a       kube-scheduler-pause-854311
	8bde402d68f35       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   24 seconds ago      Running             etcd                      1                   d68e06b3bbf0b       etcd-pause-854311
	207d3beaf5233       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   24 seconds ago      Running             kube-apiserver            1                   dd316d1c22d11       kube-apiserver-pause-854311
	7297d75181efd       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   24 seconds ago      Running             kube-controller-manager   1                   a5150c2dac57a       kube-controller-manager-pause-854311
	8692daa04dec4       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   30 seconds ago      Exited              kube-scheduler            1                   2fb90e16f6a2e       kube-scheduler-pause-854311
	47d974cf94529       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   45 seconds ago      Exited              coredns                   0                   aa75d439623f8       coredns-668d6bf9bc-gzdq9
	980d18164dbf2       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   45 seconds ago      Exited              kube-proxy                0                   3a7633f26f654       kube-proxy-9tqpq
	a6b36114da1c5       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   56 seconds ago      Exited              kube-controller-manager   0                   4e315b7b9873c       kube-controller-manager-pause-854311
	91afd2984825c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   56 seconds ago      Exited              etcd                      0                   1065b5bcae2d9       etcd-pause-854311
	0d57ab471258d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   56 seconds ago      Exited              kube-apiserver            0                   abd463b97fb57       kube-apiserver-pause-854311
	
	
	==> coredns [47d974cf94529cf52b7d3d9e738bad05dd7a3286a97aedfe8750b879f1cdaeaf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57772 - 24853 "HINFO IN 3240051374241671829.1070327449255104046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032883881s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6565e7a5d03e9888a560be77a43241f3093a818bf1497fcb3d28dde786a2a8c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49140 - 28287 "HINFO IN 7333211155852405535.2950304355677977995. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033649413s
	
	
	==> describe nodes <==
	Name:               pause-854311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-854311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=pause-854311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_50_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-854311
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:50:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Apr 2025 20:50:38 +0000   Tue, 01 Apr 2025 20:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.73
	  Hostname:    pause-854311
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 21aa8da5f9d84ade9ab07ff7e5125a73
	  System UUID:                21aa8da5-f9d8-4ade-9ab0-7ff7e5125a73
	  Boot ID:                    9c0a3895-9dba-4ad5-be55-ef1496052e35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-gzdq9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     46s
	  kube-system                 etcd-pause-854311                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         50s
	  kube-system                 kube-apiserver-pause-854311             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-pause-854311    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-9tqpq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-pause-854311             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 45s                kube-proxy       
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node pause-854311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node pause-854311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x7 over 57s)  kubelet          Node pause-854311 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s                kubelet          Node pause-854311 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    50s                kubelet          Node pause-854311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s                kubelet          Node pause-854311 status is now: NodeHasSufficientPID
	  Normal  NodeReady                50s                kubelet          Node pause-854311 status is now: NodeReady
	  Normal  RegisteredNode           47s                node-controller  Node pause-854311 event: Registered Node pause-854311 in Controller
	  Normal  CIDRAssignmentFailed     47s                cidrAllocator    Node pause-854311 status is now: CIDRAssignmentFailed
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-854311 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-854311 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-854311 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-854311 event: Registered Node pause-854311 in Controller
	
	
	==> dmesg <==
	[  +0.059188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074324] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.196543] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.148810] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.314116] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +4.870459] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +0.059076] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 1 20:50] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.528004] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.061194] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.087395] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.340637] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.858105] kauditd_printk_skb: 43 callbacks suppressed
	[ +14.377700] systemd-fstab-generator[2072]: Ignoring "noauto" option for root device
	[  +0.082132] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.098011] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[  +0.198814] systemd-fstab-generator[2098]: Ignoring "noauto" option for root device
	[  +0.167847] systemd-fstab-generator[2110]: Ignoring "noauto" option for root device
	[  +0.487952] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +2.535713] systemd-fstab-generator[2471]: Ignoring "noauto" option for root device
	[  +2.463660] systemd-fstab-generator[2594]: Ignoring "noauto" option for root device
	[  +0.086794] kauditd_printk_skb: 153 callbacks suppressed
	[  +5.516789] kauditd_printk_skb: 54 callbacks suppressed
	[  +8.255212] kauditd_printk_skb: 21 callbacks suppressed
	[  +4.815774] systemd-fstab-generator[3309]: Ignoring "noauto" option for root device
	
	
	==> etcd [8bde402d68f357e543f94d421e13e8ddd7f61cb906a2e92ce79355e360cacd6d] <==
	{"level":"info","ts":"2025-04-01T20:50:35.835669Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b8c5aec97845a467","local-member-id":"18962caea3d343e5","added-peer-id":"18962caea3d343e5","added-peer-peer-urls":["https://192.168.83.73:2380"]}
	{"level":"info","ts":"2025-04-01T20:50:35.835752Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b8c5aec97845a467","local-member-id":"18962caea3d343e5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:35.835795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:35.841390Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:35.845182Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:50:35.845475Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"18962caea3d343e5","initial-advertise-peer-urls":["https://192.168.83.73:2380"],"listen-peer-urls":["https://192.168.83.73:2380"],"advertise-client-urls":["https://192.168.83.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:50:35.845521Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:50:35.845599Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:35.845622Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:37.115431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:37.115604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:37.115675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 received MsgPreVoteResp from 18962caea3d343e5 at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:37.115731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.115760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 received MsgVoteResp from 18962caea3d343e5 at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.115790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.115818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18962caea3d343e5 elected leader 18962caea3d343e5 at term 3"}
	{"level":"info","ts":"2025-04-01T20:50:37.120868Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"18962caea3d343e5","local-member-attributes":"{Name:pause-854311 ClientURLs:[https://192.168.83.73:2379]}","request-path":"/0/members/18962caea3d343e5/attributes","cluster-id":"b8c5aec97845a467","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:50:37.121365Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:37.122757Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:37.123855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.73:2379"}
	{"level":"info","ts":"2025-04-01T20:50:37.124742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:37.125622Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:37.127230Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:50:37.125744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:50:37.137066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [91afd2984825c7a8844236cf23bca3ccbbbe2065e1a5f35b465c4afaf1b097d7] <==
	{"level":"info","ts":"2025-04-01T20:50:03.928178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18962caea3d343e5 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:03.928185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18962caea3d343e5 elected leader 18962caea3d343e5 at term 2"}
	{"level":"info","ts":"2025-04-01T20:50:03.932210Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.934397Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"18962caea3d343e5","local-member-attributes":"{Name:pause-854311 ClientURLs:[https://192.168.83.73:2379]}","request-path":"/0/members/18962caea3d343e5/attributes","cluster-id":"b8c5aec97845a467","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:50:03.934449Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:03.934818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:50:03.935138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b8c5aec97845a467","local-member-id":"18962caea3d343e5","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.935254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.935310Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:50:03.935751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:03.940545Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:50:03.945342Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:50:03.950215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.73:2379"}
	{"level":"info","ts":"2025-04-01T20:50:03.967050Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:50:03.975011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:50:21.681665Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-01T20:50:21.681733Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-854311","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.73:2380"],"advertise-client-urls":["https://192.168.83.73:2379"]}
	{"level":"warn","ts":"2025-04-01T20:50:21.681857Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:50:21.681941Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:50:21.711621Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.73:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-01T20:50:21.711799Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.73:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-01T20:50:21.712140Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"18962caea3d343e5","current-leader-member-id":"18962caea3d343e5"}
	{"level":"info","ts":"2025-04-01T20:50:21.716528Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:21.716754Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.83.73:2380"}
	{"level":"info","ts":"2025-04-01T20:50:21.716856Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-854311","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.73:2380"],"advertise-client-urls":["https://192.168.83.73:2379"]}
	
	
	==> kernel <==
	 20:51:00 up 1 min,  0 users,  load average: 1.33, 0.39, 0.13
	Linux pause-854311 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d57ab471258d185e9b4a15b22ca60210c4eed9fe2d14e5cbd61617fa10b5315] <==
	I0401 20:50:06.958241       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:50:06.967174       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:50:06.967213       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:50:07.795520       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:50:07.884853       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:50:08.019659       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:50:08.027468       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.73]
	I0401 20:50:08.028721       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:50:08.033863       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:50:08.046773       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:50:08.967556       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:50:08.982185       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:50:09.000233       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:50:13.451202       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:50:13.504050       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:50:21.677751       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0401 20:50:21.691679       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.691789       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.691839       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.691925       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.692658       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.692904       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.695451       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.696063       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 20:50:21.697622       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [207d3beaf52339693a975a80de123ce12a39c53006bebc32091829aceb53790c] <==
	E0401 20:50:38.754119       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 20:50:38.765080       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0401 20:50:38.788917       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0401 20:50:38.789776       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0401 20:50:38.789875       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0401 20:50:38.790227       1 shared_informer.go:320] Caches are synced for configmaps
	I0401 20:50:38.794056       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:50:38.794125       1 policy_source.go:240] refreshing policies
	I0401 20:50:38.795068       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 20:50:38.807394       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0401 20:50:38.808406       1 aggregator.go:171] initial CRD sync complete...
	I0401 20:50:38.808456       1 autoregister_controller.go:144] Starting autoregister controller
	I0401 20:50:38.808465       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:50:38.808470       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:50:38.816787       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0401 20:50:38.848115       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:50:39.604656       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:50:39.747152       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:50:40.643606       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:50:40.730654       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:50:40.806655       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:50:40.838810       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:50:42.136036       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:50:42.284591       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:50:48.190504       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7297d75181efd66cd5ce766036eabdbb1f314b85247bb944574d0c50cdfdd7f8] <==
	I0401 20:50:41.982269       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0401 20:50:41.982041       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0401 20:50:41.982056       1 shared_informer.go:320] Caches are synced for disruption
	I0401 20:50:41.982063       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0401 20:50:41.984504       1 shared_informer.go:320] Caches are synced for deployment
	I0401 20:50:41.984619       1 shared_informer.go:320] Caches are synced for node
	I0401 20:50:41.984751       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:50:41.984938       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:50:41.985024       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:50:41.985046       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:50:41.985223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:41.990069       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0401 20:50:41.990303       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:50:42.000317       1 shared_informer.go:320] Caches are synced for HPA
	I0401 20:50:42.003741       1 shared_informer.go:320] Caches are synced for expand
	I0401 20:50:42.008212       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0401 20:50:42.017528       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:50:42.027050       1 shared_informer.go:320] Caches are synced for namespace
	I0401 20:50:42.029585       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:50:42.030742       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:50:42.033264       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:50:42.040660       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0401 20:50:42.043129       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:50:48.199581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="37.118801ms"
	I0401 20:50:48.202830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="244.05µs"
	
	
	==> kube-controller-manager [a6b36114da1c55efc95dfda3b3946fcbb0716a1d2d428d3a60e42670f0df029d] <==
	I0401 20:50:12.647400       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:50:12.647692       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0401 20:50:12.648900       1 shared_informer.go:320] Caches are synced for GC
	I0401 20:50:12.649236       1 shared_informer.go:320] Caches are synced for stateful set
	I0401 20:50:12.649285       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0401 20:50:12.650440       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:50:12.657099       1 shared_informer.go:320] Caches are synced for crt configmap
	E0401 20:50:12.659563       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"pause-854311\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-854311" podCIDRs=["10.244.1.0/24"]
	E0401 20:50:12.659786       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"pause-854311\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-854311"
	E0401 20:50:12.661664       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'pause-854311': failed to patch node CIDR: Node \"pause-854311\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0401 20:50:12.661813       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:12.663084       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:50:12.665058       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:50:12.667453       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:12.907020       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	I0401 20:50:13.769084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="303.592273ms"
	I0401 20:50:13.796863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="27.607736ms"
	I0401 20:50:13.817351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.416003ms"
	I0401 20:50:13.818054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="199.998µs"
	I0401 20:50:13.833458       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="78.599µs"
	I0401 20:50:15.141297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="96.871µs"
	I0401 20:50:15.177536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="77.007µs"
	I0401 20:50:15.188681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="44.744µs"
	I0401 20:50:15.194628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="75.896µs"
	I0401 20:50:19.357897       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-854311"
	
	
	==> kube-proxy [55d6822c5af0a6faf456ef175c9d3deff43082fe0f2b72c578150f6473da2e4b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 20:50:40.430338       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 20:50:40.454242       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.73"]
	E0401 20:50:40.454433       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:50:40.537267       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 20:50:40.537371       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 20:50:40.537406       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:50:40.540453       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:50:40.541285       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:50:40.541318       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:50:40.544479       1 config.go:199] "Starting service config controller"
	I0401 20:50:40.544530       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:50:40.544570       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:50:40.544593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:50:40.548702       1 config.go:329] "Starting node config controller"
	I0401 20:50:40.548735       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:50:40.644749       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:50:40.644832       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:50:40.654800       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [980d18164dbf2f6ba4d2cf6ff26452099b25fb224149014210edbf632e0ba133] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0401 20:50:14.362464       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0401 20:50:14.407094       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.83.73"]
	E0401 20:50:14.407576       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:50:14.514717       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0401 20:50:14.514767       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 20:50:14.514912       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:50:14.523336       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:50:14.524559       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:50:14.524867       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:50:14.528566       1 config.go:199] "Starting service config controller"
	I0401 20:50:14.528864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:50:14.529069       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:50:14.529099       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:50:14.534297       1 config.go:329] "Starting node config controller"
	I0401 20:50:14.534403       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:50:14.630010       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:50:14.630056       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:50:14.634926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [816d76695fcb8b11d1ab8a0882e6a76b31ff5c19ee3fd35855a7d62725b69f78] <==
	I0401 20:50:36.115386       1 serving.go:386] Generated self-signed cert in-memory
	I0401 20:50:38.774192       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:50:38.774349       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:50:38.782728       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0401 20:50:38.783367       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0401 20:50:38.783579       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:50:38.783610       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:50:38.783733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0401 20:50:38.783826       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0401 20:50:38.785802       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:50:38.786751       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:50:38.884188       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0401 20:50:38.884417       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0401 20:50:38.884447       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8692daa04dec42f439dcc433f24d37c4137a9802b76f108efb17c4b5c642111c] <==
	
	
	==> kubelet <==
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.773078    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.850371    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.866247    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-854311\" already exists" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.893701    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-854311\" already exists" pod="kube-system/kube-apiserver-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.893890    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.914949    2601 kubelet_node_status.go:125] "Node was previously registered" node="pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.915332    2601 kubelet_node_status.go:79] "Successfully registered node" node="pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.915464    2601 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.917139    2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.927780    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-854311\" already exists" pod="kube-system/kube-controller-manager-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.927849    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.943937    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-854311\" already exists" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: I0401 20:50:38.944185    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-854311"
	Apr 01 20:50:38 pause-854311 kubelet[2601]: E0401 20:50:38.956169    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-854311\" already exists" pod="kube-system/etcd-pause-854311"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.195054    2601 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: E0401 20:50:39.204305    2601 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-854311\" already exists" pod="kube-system/kube-scheduler-pause-854311"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.578944    2601 apiserver.go:52] "Watching apiserver"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.673072    2601 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.742343    2601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed694bf-68e5-4bc0-9fbe-8df6e74dc624-lib-modules\") pod \"kube-proxy-9tqpq\" (UID: \"5ed694bf-68e5-4bc0-9fbe-8df6e74dc624\") " pod="kube-system/kube-proxy-9tqpq"
	Apr 01 20:50:39 pause-854311 kubelet[2601]: I0401 20:50:39.742420    2601 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed694bf-68e5-4bc0-9fbe-8df6e74dc624-xtables-lock\") pod \"kube-proxy-9tqpq\" (UID: \"5ed694bf-68e5-4bc0-9fbe-8df6e74dc624\") " pod="kube-system/kube-proxy-9tqpq"
	Apr 01 20:50:44 pause-854311 kubelet[2601]: E0401 20:50:44.745111    2601 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540644744684259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:50:44 pause-854311 kubelet[2601]: E0401 20:50:44.745142    2601 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540644744684259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:50:48 pause-854311 kubelet[2601]: I0401 20:50:48.140162    2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 01 20:50:54 pause-854311 kubelet[2601]: E0401 20:50:54.751238    2601 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540654750746610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:50:54 pause-854311 kubelet[2601]: E0401 20:50:54.751273    2601 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540654750746610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-854311 -n pause-854311
helpers_test.go:261: (dbg) Run:  kubectl --context pause-854311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (40.48s)
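Note on the kube-proxy warnings in the logs above: the repeated "could not run nftables command ... Operation not supported" messages typically indicate the guest kernel lacks nf_tables support, so kube-proxy skips the nftables cleanup and continues with the iptables proxier (see the "Using iptables Proxier" lines); they are unlikely to be the cause of this particular failure. A quick, hedged way to confirm from the host, assuming the profile name from the logs and that nft and lsmod are present inside the guest:

	out/minikube-linux-amd64 -p pause-854311 ssh "sudo nft list tables"       # expected to fail if nf_tables is unsupported
	out/minikube-linux-amd64 -p pause-854311 ssh "lsmod | grep -i nf_tables"  # empty output suggests the module is absent
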

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (280.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m40.344884035s)
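The captured stdout below shows "Generating certificates and keys ..." and "Booting up control plane ..." printed twice, which suggests the control-plane bootstrap was retried after a first failed attempt before minikube gave up with exit status 109. A minimal, hedged way to reproduce this outside the test harness and collect logs, reusing the same flags the test passed, would be roughly:

	out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --alsologtostderr
	out/minikube-linux-amd64 -p old-k8s-version-582207 logs -n 25
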

                                                
                                                
-- stdout --
	* [old-k8s-version-582207] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-582207" primary control-plane node in "old-k8s-version-582207" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:53:42.647020   57531 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:53:42.647146   57531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:53:42.647155   57531 out.go:358] Setting ErrFile to fd 2...
	I0401 20:53:42.647162   57531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:53:42.647359   57531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:53:42.647957   57531 out.go:352] Setting JSON to false
	I0401 20:53:42.648884   57531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5767,"bootTime":1743535056,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:53:42.648979   57531 start.go:139] virtualization: kvm guest
	I0401 20:53:42.651099   57531 out.go:177] * [old-k8s-version-582207] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:53:42.652321   57531 notify.go:220] Checking for updates...
	I0401 20:53:42.652330   57531 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:53:42.653662   57531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:53:42.655057   57531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:53:42.656451   57531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:53:42.657787   57531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:53:42.659007   57531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:53:42.660506   57531 config.go:182] Loaded profile config "cert-expiration-808084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:53:42.660637   57531 config.go:182] Loaded profile config "cert-options-454573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:53:42.660736   57531 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:53:42.660843   57531 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:53:42.698545   57531 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 20:53:42.699608   57531 start.go:297] selected driver: kvm2
	I0401 20:53:42.699622   57531 start.go:901] validating driver "kvm2" against <nil>
	I0401 20:53:42.699633   57531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:53:42.700339   57531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:53:42.700441   57531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 20:53:42.717330   57531 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 20:53:42.717377   57531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:53:42.717608   57531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:53:42.717638   57531 cni.go:84] Creating CNI manager for ""
	I0401 20:53:42.717677   57531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:53:42.717685   57531 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 20:53:42.717734   57531 start.go:340] cluster config:
	{Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:53:42.717823   57531 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:53:42.719682   57531 out.go:177] * Starting "old-k8s-version-582207" primary control-plane node in "old-k8s-version-582207" cluster
	I0401 20:53:42.721011   57531 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:53:42.721057   57531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 20:53:42.721070   57531 cache.go:56] Caching tarball of preloaded images
	I0401 20:53:42.721178   57531 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:53:42.721191   57531 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 20:53:42.721318   57531 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/config.json ...
	I0401 20:53:42.721347   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/config.json: {Name:mk95d37a422d6a7e7ca7572b29d73f5b24f4d42c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:53:42.721506   57531 start.go:360] acquireMachinesLock for old-k8s-version-582207: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 20:53:49.884622   57531 start.go:364] duration metric: took 7.163094375s to acquireMachinesLock for "old-k8s-version-582207"
	I0401 20:53:49.884703   57531 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:53:49.884811   57531 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 20:53:49.991809   57531 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 20:53:49.992085   57531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:53:49.992177   57531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:53:50.008365   57531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0401 20:53:50.010366   57531 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:53:50.010944   57531 main.go:141] libmachine: Using API Version  1
	I0401 20:53:50.010967   57531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:53:50.011368   57531 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:53:50.011642   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 20:53:50.011874   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:53:50.012159   57531 start.go:159] libmachine.API.Create for "old-k8s-version-582207" (driver="kvm2")
	I0401 20:53:50.012197   57531 client.go:168] LocalClient.Create starting
	I0401 20:53:50.012246   57531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 20:53:50.012302   57531 main.go:141] libmachine: Decoding PEM data...
	I0401 20:53:50.012328   57531 main.go:141] libmachine: Parsing certificate...
	I0401 20:53:50.012422   57531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 20:53:50.012452   57531 main.go:141] libmachine: Decoding PEM data...
	I0401 20:53:50.012472   57531 main.go:141] libmachine: Parsing certificate...
	I0401 20:53:50.012495   57531 main.go:141] libmachine: Running pre-create checks...
	I0401 20:53:50.012513   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .PreCreateCheck
	I0401 20:53:50.012912   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetConfigRaw
	I0401 20:53:50.013434   57531 main.go:141] libmachine: Creating machine...
	I0401 20:53:50.013453   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .Create
	I0401 20:53:50.013631   57531 main.go:141] libmachine: (old-k8s-version-582207) creating KVM machine...
	I0401 20:53:50.013654   57531 main.go:141] libmachine: (old-k8s-version-582207) creating network...
	I0401 20:53:50.015218   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found existing default KVM network
	I0401 20:53:50.016382   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:50.016172   57595 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:c4:50} reservation:<nil>}
	I0401 20:53:50.017551   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:50.017450   57595 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002019f0}
	I0401 20:53:50.017617   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | created network xml: 
	I0401 20:53:50.017644   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | <network>
	I0401 20:53:50.017660   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |   <name>mk-old-k8s-version-582207</name>
	I0401 20:53:50.017672   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |   <dns enable='no'/>
	I0401 20:53:50.017683   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |   
	I0401 20:53:50.017696   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0401 20:53:50.017711   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |     <dhcp>
	I0401 20:53:50.017724   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0401 20:53:50.017736   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |     </dhcp>
	I0401 20:53:50.017757   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |   </ip>
	I0401 20:53:50.017767   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG |   
	I0401 20:53:50.017773   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | </network>
	I0401 20:53:50.017788   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | 
	I0401 20:53:50.106014   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | trying to create private KVM network mk-old-k8s-version-582207 192.168.50.0/24...
	I0401 20:53:50.193334   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | private KVM network mk-old-k8s-version-582207 192.168.50.0/24 created
	I0401 20:53:50.193371   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:50.193300   57595 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:53:50.193399   57531 main.go:141] libmachine: (old-k8s-version-582207) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207 ...
	I0401 20:53:50.193429   57531 main.go:141] libmachine: (old-k8s-version-582207) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 20:53:50.193451   57531 main.go:141] libmachine: (old-k8s-version-582207) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 20:53:50.460331   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:50.460162   57595 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa...
	I0401 20:53:50.938551   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:50.938391   57595 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/old-k8s-version-582207.rawdisk...
	I0401 20:53:50.938588   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | Writing magic tar header
	I0401 20:53:50.938607   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | Writing SSH key tar header
	I0401 20:53:50.938621   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:50.938534   57595 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207 ...
	I0401 20:53:50.938714   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207
	I0401 20:53:50.938737   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 20:53:50.938751   57531 main.go:141] libmachine: (old-k8s-version-582207) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207 (perms=drwx------)
	I0401 20:53:50.938766   57531 main.go:141] libmachine: (old-k8s-version-582207) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 20:53:50.938777   57531 main.go:141] libmachine: (old-k8s-version-582207) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 20:53:50.938796   57531 main.go:141] libmachine: (old-k8s-version-582207) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 20:53:50.938810   57531 main.go:141] libmachine: (old-k8s-version-582207) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 20:53:50.938819   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:53:50.938831   57531 main.go:141] libmachine: (old-k8s-version-582207) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 20:53:50.938842   57531 main.go:141] libmachine: (old-k8s-version-582207) creating domain...
	I0401 20:53:50.938855   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 20:53:50.938863   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 20:53:50.938871   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home/jenkins
	I0401 20:53:50.938878   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | checking permissions on dir: /home
	I0401 20:53:50.938891   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | skipping /home - not owner
	I0401 20:53:50.940057   57531 main.go:141] libmachine: (old-k8s-version-582207) define libvirt domain using xml: 
	I0401 20:53:50.940085   57531 main.go:141] libmachine: (old-k8s-version-582207) <domain type='kvm'>
	I0401 20:53:50.940096   57531 main.go:141] libmachine: (old-k8s-version-582207)   <name>old-k8s-version-582207</name>
	I0401 20:53:50.940107   57531 main.go:141] libmachine: (old-k8s-version-582207)   <memory unit='MiB'>2200</memory>
	I0401 20:53:50.940140   57531 main.go:141] libmachine: (old-k8s-version-582207)   <vcpu>2</vcpu>
	I0401 20:53:50.940164   57531 main.go:141] libmachine: (old-k8s-version-582207)   <features>
	I0401 20:53:50.940175   57531 main.go:141] libmachine: (old-k8s-version-582207)     <acpi/>
	I0401 20:53:50.940210   57531 main.go:141] libmachine: (old-k8s-version-582207)     <apic/>
	I0401 20:53:50.940225   57531 main.go:141] libmachine: (old-k8s-version-582207)     <pae/>
	I0401 20:53:50.940238   57531 main.go:141] libmachine: (old-k8s-version-582207)     
	I0401 20:53:50.940250   57531 main.go:141] libmachine: (old-k8s-version-582207)   </features>
	I0401 20:53:50.940268   57531 main.go:141] libmachine: (old-k8s-version-582207)   <cpu mode='host-passthrough'>
	I0401 20:53:50.940280   57531 main.go:141] libmachine: (old-k8s-version-582207)   
	I0401 20:53:50.940290   57531 main.go:141] libmachine: (old-k8s-version-582207)   </cpu>
	I0401 20:53:50.940300   57531 main.go:141] libmachine: (old-k8s-version-582207)   <os>
	I0401 20:53:50.940314   57531 main.go:141] libmachine: (old-k8s-version-582207)     <type>hvm</type>
	I0401 20:53:50.940324   57531 main.go:141] libmachine: (old-k8s-version-582207)     <boot dev='cdrom'/>
	I0401 20:53:50.940335   57531 main.go:141] libmachine: (old-k8s-version-582207)     <boot dev='hd'/>
	I0401 20:53:50.940344   57531 main.go:141] libmachine: (old-k8s-version-582207)     <bootmenu enable='no'/>
	I0401 20:53:50.940350   57531 main.go:141] libmachine: (old-k8s-version-582207)   </os>
	I0401 20:53:50.940364   57531 main.go:141] libmachine: (old-k8s-version-582207)   <devices>
	I0401 20:53:50.940371   57531 main.go:141] libmachine: (old-k8s-version-582207)     <disk type='file' device='cdrom'>
	I0401 20:53:50.940379   57531 main.go:141] libmachine: (old-k8s-version-582207)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/boot2docker.iso'/>
	I0401 20:53:50.940390   57531 main.go:141] libmachine: (old-k8s-version-582207)       <target dev='hdc' bus='scsi'/>
	I0401 20:53:50.940398   57531 main.go:141] libmachine: (old-k8s-version-582207)       <readonly/>
	I0401 20:53:50.940404   57531 main.go:141] libmachine: (old-k8s-version-582207)     </disk>
	I0401 20:53:50.940416   57531 main.go:141] libmachine: (old-k8s-version-582207)     <disk type='file' device='disk'>
	I0401 20:53:50.940429   57531 main.go:141] libmachine: (old-k8s-version-582207)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 20:53:50.940446   57531 main.go:141] libmachine: (old-k8s-version-582207)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/old-k8s-version-582207.rawdisk'/>
	I0401 20:53:50.940455   57531 main.go:141] libmachine: (old-k8s-version-582207)       <target dev='hda' bus='virtio'/>
	I0401 20:53:50.940484   57531 main.go:141] libmachine: (old-k8s-version-582207)     </disk>
	I0401 20:53:50.940510   57531 main.go:141] libmachine: (old-k8s-version-582207)     <interface type='network'>
	I0401 20:53:50.940525   57531 main.go:141] libmachine: (old-k8s-version-582207)       <source network='mk-old-k8s-version-582207'/>
	I0401 20:53:50.940543   57531 main.go:141] libmachine: (old-k8s-version-582207)       <model type='virtio'/>
	I0401 20:53:50.940557   57531 main.go:141] libmachine: (old-k8s-version-582207)     </interface>
	I0401 20:53:50.940571   57531 main.go:141] libmachine: (old-k8s-version-582207)     <interface type='network'>
	I0401 20:53:50.940597   57531 main.go:141] libmachine: (old-k8s-version-582207)       <source network='default'/>
	I0401 20:53:50.940625   57531 main.go:141] libmachine: (old-k8s-version-582207)       <model type='virtio'/>
	I0401 20:53:50.940637   57531 main.go:141] libmachine: (old-k8s-version-582207)     </interface>
	I0401 20:53:50.940644   57531 main.go:141] libmachine: (old-k8s-version-582207)     <serial type='pty'>
	I0401 20:53:50.940661   57531 main.go:141] libmachine: (old-k8s-version-582207)       <target port='0'/>
	I0401 20:53:50.940677   57531 main.go:141] libmachine: (old-k8s-version-582207)     </serial>
	I0401 20:53:50.940698   57531 main.go:141] libmachine: (old-k8s-version-582207)     <console type='pty'>
	I0401 20:53:50.940714   57531 main.go:141] libmachine: (old-k8s-version-582207)       <target type='serial' port='0'/>
	I0401 20:53:50.940733   57531 main.go:141] libmachine: (old-k8s-version-582207)     </console>
	I0401 20:53:50.940743   57531 main.go:141] libmachine: (old-k8s-version-582207)     <rng model='virtio'>
	I0401 20:53:50.940753   57531 main.go:141] libmachine: (old-k8s-version-582207)       <backend model='random'>/dev/random</backend>
	I0401 20:53:50.940764   57531 main.go:141] libmachine: (old-k8s-version-582207)     </rng>
	I0401 20:53:50.940774   57531 main.go:141] libmachine: (old-k8s-version-582207)     
	I0401 20:53:50.940788   57531 main.go:141] libmachine: (old-k8s-version-582207)     
	I0401 20:53:50.940806   57531 main.go:141] libmachine: (old-k8s-version-582207)   </devices>
	I0401 20:53:50.940816   57531 main.go:141] libmachine: (old-k8s-version-582207) </domain>
	I0401 20:53:50.940827   57531 main.go:141] libmachine: (old-k8s-version-582207) 
	I0401 20:53:51.052319   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:7b:2d:4b in network default
	I0401 20:53:51.053037   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:51.053058   57531 main.go:141] libmachine: (old-k8s-version-582207) starting domain...
	I0401 20:53:51.053071   57531 main.go:141] libmachine: (old-k8s-version-582207) ensuring networks are active...
	I0401 20:53:51.053890   57531 main.go:141] libmachine: (old-k8s-version-582207) Ensuring network default is active
	I0401 20:53:51.054346   57531 main.go:141] libmachine: (old-k8s-version-582207) Ensuring network mk-old-k8s-version-582207 is active
	I0401 20:53:51.054933   57531 main.go:141] libmachine: (old-k8s-version-582207) getting domain XML...
	I0401 20:53:51.055802   57531 main.go:141] libmachine: (old-k8s-version-582207) creating domain...
	I0401 20:53:52.880833   57531 main.go:141] libmachine: (old-k8s-version-582207) waiting for IP...
	I0401 20:53:52.881528   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:52.882042   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:52.882130   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:52.882051   57595 retry.go:31] will retry after 289.957426ms: waiting for domain to come up
	I0401 20:53:53.173941   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:53.174573   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:53.174605   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:53.174551   57595 retry.go:31] will retry after 266.253438ms: waiting for domain to come up
	I0401 20:53:53.442224   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:53.442722   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:53.442750   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:53.442693   57595 retry.go:31] will retry after 446.871696ms: waiting for domain to come up
	I0401 20:53:53.891272   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:53.891802   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:53.891882   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:53.891777   57595 retry.go:31] will retry after 569.252707ms: waiting for domain to come up
	I0401 20:53:54.463407   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:54.464059   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:54.464095   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:54.464013   57595 retry.go:31] will retry after 469.816864ms: waiting for domain to come up
	I0401 20:53:54.935680   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:54.936243   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:54.936283   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:54.936219   57595 retry.go:31] will retry after 921.859425ms: waiting for domain to come up
	I0401 20:53:55.859869   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:55.860473   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:55.860496   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:55.860447   57595 retry.go:31] will retry after 850.782612ms: waiting for domain to come up
	I0401 20:53:56.713119   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:56.713746   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:56.713778   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:56.713713   57595 retry.go:31] will retry after 990.012619ms: waiting for domain to come up
	I0401 20:53:57.704949   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:57.705399   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:57.705429   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:57.705366   57595 retry.go:31] will retry after 1.762914653s: waiting for domain to come up
	I0401 20:53:59.469611   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:53:59.470190   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:53:59.470240   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:53:59.470166   57595 retry.go:31] will retry after 1.534016209s: waiting for domain to come up
	I0401 20:54:01.005304   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:01.005947   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:54:01.005972   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:54:01.005917   57595 retry.go:31] will retry after 2.225054354s: waiting for domain to come up
	I0401 20:54:03.607559   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:03.607980   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:54:03.608002   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:54:03.607969   57595 retry.go:31] will retry after 2.694092078s: waiting for domain to come up
	I0401 20:54:06.303340   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:06.303804   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:54:06.303859   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:54:06.303814   57595 retry.go:31] will retry after 4.53912183s: waiting for domain to come up
	I0401 20:54:10.844055   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:10.844590   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 20:54:10.844632   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 20:54:10.844576   57595 retry.go:31] will retry after 4.477794571s: waiting for domain to come up
	I0401 20:54:15.324284   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.324898   57531 main.go:141] libmachine: (old-k8s-version-582207) found domain IP: 192.168.50.128
	I0401 20:54:15.324929   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has current primary IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
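	The block above is the driver polling libvirt for the new domain's DHCP lease, retrying with a growing, jittered delay until the guest obtains an address. The following is a minimal, self-contained sketch of that wait-with-backoff pattern; lookupDomainIP, the backoff constants, and the deadline are illustrative assumptions, not minikube's actual retry.go implementation.

```go
// Sketch of a poll-with-backoff wait for a libvirt domain to obtain an IP.
// lookupDomainIP is a hypothetical stand-in for the DHCP-lease query seen in
// the log above; in the real driver this comes from the libvirt API.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupDomainIP(domain string) (string, error) {
	// Placeholder: the real code inspects the network's DHCP leases.
	return "", errors.New("unable to find current IP address of domain " + domain)
}

func waitForDomainIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupDomainIP(domain); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, similar in spirit to the retry lines above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", fmt.Errorf("domain %s did not obtain an IP within %v", domain, deadline)
}

func main() {
	if ip, err := waitForDomainIP("old-k8s-version-582207", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found domain IP:", ip)
	}
}
```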
	I0401 20:54:15.324937   57531 main.go:141] libmachine: (old-k8s-version-582207) reserving static IP address...
	I0401 20:54:15.325642   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-582207", mac: "52:54:00:56:a4:0e", ip: "192.168.50.128"} in network mk-old-k8s-version-582207
	I0401 20:54:15.407729   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | Getting to WaitForSSH function...
	I0401 20:54:15.407762   57531 main.go:141] libmachine: (old-k8s-version-582207) reserved static IP address 192.168.50.128 for domain old-k8s-version-582207
	I0401 20:54:15.407776   57531 main.go:141] libmachine: (old-k8s-version-582207) waiting for SSH...
	I0401 20:54:15.410006   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.410397   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:15.410439   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.410600   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | Using SSH client type: external
	I0401 20:54:15.410622   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa (-rw-------)
	I0401 20:54:15.410658   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 20:54:15.410671   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | About to run SSH command:
	I0401 20:54:15.410782   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | exit 0
	I0401 20:54:15.538279   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | SSH cmd err, output: <nil>: 
	I0401 20:54:15.538556   57531 main.go:141] libmachine: (old-k8s-version-582207) KVM machine creation complete
	I0401 20:54:15.538979   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetConfigRaw
	I0401 20:54:15.539579   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:15.539764   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:15.539925   57531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 20:54:15.539943   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetState
	I0401 20:54:15.541229   57531 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 20:54:15.541244   57531 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 20:54:15.541252   57531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 20:54:15.541259   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:15.543923   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.544299   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:15.544332   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.544411   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:15.544619   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.544774   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.544923   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:15.545075   57531 main.go:141] libmachine: Using SSH client type: native
	I0401 20:54:15.545278   57531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 20:54:15.545286   57531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 20:54:15.649741   57531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:54:15.649762   57531 main.go:141] libmachine: Detecting the provisioner...
	I0401 20:54:15.649769   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:15.652592   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.652969   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:15.652996   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.653096   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:15.653312   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.653474   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.653605   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:15.653745   57531 main.go:141] libmachine: Using SSH client type: native
	I0401 20:54:15.653959   57531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 20:54:15.653971   57531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 20:54:15.759158   57531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 20:54:15.759227   57531 main.go:141] libmachine: found compatible host: buildroot
	I0401 20:54:15.759242   57531 main.go:141] libmachine: Provisioning with buildroot...
	I0401 20:54:15.759255   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 20:54:15.759478   57531 buildroot.go:166] provisioning hostname "old-k8s-version-582207"
	I0401 20:54:15.759516   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 20:54:15.759726   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:15.762308   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.762658   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:15.762688   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.762857   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:15.763011   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.763177   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.763283   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:15.763420   57531 main.go:141] libmachine: Using SSH client type: native
	I0401 20:54:15.763683   57531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 20:54:15.763703   57531 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-582207 && echo "old-k8s-version-582207" | sudo tee /etc/hostname
	I0401 20:54:15.881105   57531 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-582207
	
	I0401 20:54:15.881131   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:15.883842   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.884219   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:15.884249   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:15.884454   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:15.884671   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.884821   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:15.884927   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:15.885044   57531 main.go:141] libmachine: Using SSH client type: native
	I0401 20:54:15.885236   57531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 20:54:15.885252   57531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-582207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-582207/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-582207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:54:16.000247   57531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:54:16.000278   57531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 20:54:16.000343   57531 buildroot.go:174] setting up certificates
	I0401 20:54:16.000379   57531 provision.go:84] configureAuth start
	I0401 20:54:16.000398   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 20:54:16.000690   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 20:54:16.003218   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.003514   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.003546   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.003673   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:16.005739   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.006053   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.006080   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.006196   57531 provision.go:143] copyHostCerts
	I0401 20:54:16.006263   57531 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 20:54:16.006283   57531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 20:54:16.006340   57531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 20:54:16.006437   57531 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 20:54:16.006444   57531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 20:54:16.006462   57531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 20:54:16.006523   57531 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 20:54:16.006530   57531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 20:54:16.006569   57531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 20:54:16.006639   57531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-582207 san=[127.0.0.1 192.168.50.128 localhost minikube old-k8s-version-582207]
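	The provisioner then generates a server certificate whose subject alternative names cover the loopback address, the guest IP, and the machine's hostnames (the san=[...] list in the line above). Below is a minimal sketch of producing such a certificate with Go's standard crypto/x509 package; it self-signs for brevity, whereas the real flow signs with the profile's CA key, and the key size and validity period are assumptions.

```go
// Sketch: build a server certificate with the SANs logged above
// (127.0.0.1, 192.168.50.128, localhost, minikube, old-k8s-version-582207).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-582207"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // assumed validity for illustration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-582207"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.128")},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```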
	I0401 20:54:16.425317   57531 provision.go:177] copyRemoteCerts
	I0401 20:54:16.425377   57531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:54:16.425400   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:16.428091   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.428406   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.428437   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.428568   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:16.428772   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:16.428934   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:16.429071   57531 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 20:54:16.515215   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:54:16.542115   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:54:16.565355   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:54:16.587242   57531 provision.go:87] duration metric: took 586.846628ms to configureAuth
	I0401 20:54:16.587272   57531 buildroot.go:189] setting minikube options for container-runtime
	I0401 20:54:16.587423   57531 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:54:16.587495   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:16.589777   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.590126   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.590155   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.590333   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:16.590515   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:16.590658   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:16.590796   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:16.590931   57531 main.go:141] libmachine: Using SSH client type: native
	I0401 20:54:16.591179   57531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 20:54:16.591199   57531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:54:16.834878   57531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:54:16.834911   57531 main.go:141] libmachine: Checking connection to Docker...
	I0401 20:54:16.834922   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetURL
	I0401 20:54:16.836171   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | using libvirt version 6000000
	I0401 20:54:16.838482   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.838806   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.838842   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.838970   57531 main.go:141] libmachine: Docker is up and running!
	I0401 20:54:16.838990   57531 main.go:141] libmachine: Reticulating splines...
	I0401 20:54:16.838996   57531 client.go:171] duration metric: took 26.826789273s to LocalClient.Create
	I0401 20:54:16.839018   57531 start.go:167] duration metric: took 26.826863608s to libmachine.API.Create "old-k8s-version-582207"
	I0401 20:54:16.839027   57531 start.go:293] postStartSetup for "old-k8s-version-582207" (driver="kvm2")
	I0401 20:54:16.839035   57531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:54:16.839052   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:16.839259   57531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:54:16.839280   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:16.841379   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.841660   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.841689   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.841836   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:16.841983   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:16.842125   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:16.842259   57531 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 20:54:16.924474   57531 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:54:16.928948   57531 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 20:54:16.928970   57531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 20:54:16.929036   57531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 20:54:16.929137   57531 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 20:54:16.929235   57531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:54:16.939115   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:54:16.963270   57531 start.go:296] duration metric: took 124.231976ms for postStartSetup
	I0401 20:54:16.963314   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetConfigRaw
	I0401 20:54:16.963909   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 20:54:16.966518   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.966891   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.966913   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.967128   57531 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/config.json ...
	I0401 20:54:16.967362   57531 start.go:128] duration metric: took 27.082536806s to createHost
	I0401 20:54:16.967385   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:16.969907   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.970296   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:16.970318   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:16.970515   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:16.970823   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:16.970998   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:16.971176   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:16.971330   57531 main.go:141] libmachine: Using SSH client type: native
	I0401 20:54:16.971575   57531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 20:54:16.971588   57531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 20:54:17.079159   57531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743540857.029752400
	
	I0401 20:54:17.079177   57531 fix.go:216] guest clock: 1743540857.029752400
	I0401 20:54:17.079183   57531 fix.go:229] Guest: 2025-04-01 20:54:17.0297524 +0000 UTC Remote: 2025-04-01 20:54:16.967375309 +0000 UTC m=+34.357688157 (delta=62.377091ms)
	I0401 20:54:17.079224   57531 fix.go:200] guest clock delta is within tolerance: 62.377091ms
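	The guest clock check above runs date +%s.%N on the new VM and compares it with the host-side timestamp, accepting the skew if it stays within tolerance. A small sketch reproducing the delta computation from the two values in the log follows; the one-second tolerance constant is an assumption for illustration, not minikube's configured value.

```go
// Sketch: parse the guest's `date +%s.%N` output and compute the skew against
// the host-side reference time, as in the "guest clock delta" lines above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1743540857.029752400" // guest `date +%s.%N` output from the log
	parts := strings.SplitN(guestRaw, ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		panic(err)
	}
	nanos, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(secs, nanos).UTC()

	// Host-side reference timestamp taken from the same log line.
	host := time.Date(2025, 4, 1, 20, 54, 16, 967375309, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```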
	I0401 20:54:17.079241   57531 start.go:83] releasing machines lock for "old-k8s-version-582207", held for 27.194573968s
	I0401 20:54:17.079270   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:17.079545   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 20:54:17.082298   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:17.082667   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:17.082697   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:17.082866   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:17.083306   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:17.083482   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 20:54:17.083557   57531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:54:17.083606   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:17.083707   57531 ssh_runner.go:195] Run: cat /version.json
	I0401 20:54:17.083728   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 20:54:17.086474   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:17.086496   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:17.086905   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:17.086946   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:17.086974   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:17.086988   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:17.087089   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:17.087275   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 20:54:17.087281   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:17.087445   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 20:54:17.087456   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:17.087622   57531 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 20:54:17.087635   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 20:54:17.087739   57531 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 20:54:17.201932   57531 ssh_runner.go:195] Run: systemctl --version
	I0401 20:54:17.211031   57531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:54:17.384536   57531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 20:54:17.393740   57531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 20:54:17.393831   57531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:54:17.418440   57531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 20:54:17.418466   57531 start.go:495] detecting cgroup driver to use...
	I0401 20:54:17.418536   57531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:54:17.437178   57531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:54:17.452453   57531 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:54:17.452526   57531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:54:17.473120   57531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:54:17.487245   57531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:54:17.612539   57531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:54:17.750669   57531 docker.go:233] disabling docker service ...
	I0401 20:54:17.750727   57531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:54:17.766583   57531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:54:17.779338   57531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:54:17.921380   57531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:54:18.038054   57531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:54:18.052485   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:54:18.072203   57531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:54:18.072261   57531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:54:18.082885   57531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:54:18.082946   57531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:54:18.094284   57531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:54:18.105309   57531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
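	The commands above rewrite CRI-O's drop-in config so the runtime uses the expected pause image and the cgroupfs cgroup manager. The sketch below issues the two sed edits locally via os/exec as a stand-in for minikube's remote ssh_runner; it assumes root privileges and an existing /etc/crio/crio.conf.d/02-crio.conf, and is illustrative only.

```go
// Sketch: apply the pause_image and cgroup_manager edits logged above to
// CRI-O's drop-in config. Runs the shell commands locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
	}
	for _, c := range cmds {
		out, err := exec.Command("sh", "-c", c).CombinedOutput()
		fmt.Printf("ran %q: err=%v out=%s\n", c, err, out)
	}
}
```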
	I0401 20:54:18.115587   57531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:54:18.125893   57531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:54:18.135459   57531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 20:54:18.135507   57531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 20:54:18.148686   57531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:54:18.158236   57531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:54:18.279390   57531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:54:18.381739   57531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:54:18.381798   57531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:54:18.387443   57531 start.go:563] Will wait 60s for crictl version
	I0401 20:54:18.387507   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:18.391852   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:54:18.437786   57531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 20:54:18.437891   57531 ssh_runner.go:195] Run: crio --version
	I0401 20:54:18.467786   57531 ssh_runner.go:195] Run: crio --version
	I0401 20:54:18.498898   57531 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 20:54:18.500243   57531 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 20:54:18.503752   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:18.504204   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 21:54:07 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 20:54:18.504240   57531 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 20:54:18.504450   57531 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 20:54:18.508854   57531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:54:18.521560   57531 kubeadm.go:883] updating cluster {Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:54:18.521669   57531 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:54:18.521716   57531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:54:18.562365   57531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:54:18.562435   57531 ssh_runner.go:195] Run: which lz4
	I0401 20:54:18.567385   57531 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:54:18.572170   57531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:54:18.572196   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:54:20.331176   57531 crio.go:462] duration metric: took 1.76383517s to copy over tarball
	I0401 20:54:20.331289   57531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:54:23.040980   57531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.709658369s)
	I0401 20:54:23.041012   57531 crio.go:469] duration metric: took 2.709795771s to extract the tarball
	I0401 20:54:23.041021   57531 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:54:23.088030   57531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:54:23.135182   57531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:54:23.135207   57531 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:54:23.135267   57531 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:54:23.135273   57531 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.135291   57531 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.135307   57531 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:54:23.135330   57531 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.135345   57531 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.135362   57531 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.135378   57531 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.137103   57531 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:54:23.137115   57531 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.137115   57531 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.137106   57531 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:54:23.137101   57531 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.137151   57531 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.137174   57531 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.137169   57531 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.283794   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.285327   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.285327   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.291802   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.298488   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.326885   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:54:23.374333   57531 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:54:23.374382   57531 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.374436   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.395740   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.456885   57531 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:54:23.456920   57531 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:54:23.456934   57531 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.456938   57531 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.456980   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.456980   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.474551   57531 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:54:23.474582   57531 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:54:23.474604   57531 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.474617   57531 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.474657   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.474659   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.493561   57531 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:54:23.493622   57531 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:54:23.493650   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.493666   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.498681   57531 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:54:23.498716   57531 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.498761   57531 ssh_runner.go:195] Run: which crictl
	I0401 20:54:23.498853   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.498929   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.498989   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.498990   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.502365   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:54:23.622153   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.622152   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.622245   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.643984   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.644032   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.644059   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.644069   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:54:23.768860   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.772283   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:54:23.772283   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:54:23.825474   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:54:23.825534   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:54:23.825545   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:54:23.825614   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:54:23.937093   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:54:23.937174   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:54:23.939790   57531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:54:23.999756   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:54:23.999779   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:54:23.999851   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:54:23.999897   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:54:24.018816   57531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:54:24.444534   57531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:54:24.587369   57531 cache_images.go:92] duration metric: took 1.452144217s to LoadCachedImages
	W0401 20:54:24.587457   57531 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 20:54:24.587476   57531 kubeadm.go:934] updating node { 192.168.50.128 8443 v1.20.0 crio true true} ...
	I0401 20:54:24.587592   57531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-582207 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
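	For reference, the [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders for this profile; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. As an illustrative check only (not something the test itself runs), the rendered unit could be inspected on the node while the profile is still up, for example:
	  minikube ssh -p old-k8s-version-582207 -- sudo systemctl cat kubelet
	  minikube ssh -p old-k8s-version-582207 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf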
	I0401 20:54:24.587687   57531 ssh_runner.go:195] Run: crio config
	I0401 20:54:24.635558   57531 cni.go:84] Creating CNI manager for ""
	I0401 20:54:24.635584   57531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 20:54:24.635595   57531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:54:24.635611   57531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-582207 NodeName:old-k8s-version-582207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:54:24.635728   57531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-582207"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
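	For reference, the YAML above is the kubeadm config that minikube writes to /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml before invoking kubeadm init. Assuming the old-k8s-version-582207 VM is still running, the rendered file could be viewed directly on the node, for example:
	  minikube ssh -p old-k8s-version-582207 -- sudo cat /var/tmp/minikube/kubeadm.yaml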
	I0401 20:54:24.635799   57531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:54:24.647370   57531 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:54:24.647440   57531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:54:24.657719   57531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 20:54:24.677181   57531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:54:24.696540   57531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 20:54:24.715773   57531 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0401 20:54:24.719793   57531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:54:24.733200   57531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:54:24.879500   57531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:54:24.898552   57531 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207 for IP: 192.168.50.128
	I0401 20:54:24.898572   57531 certs.go:194] generating shared ca certs ...
	I0401 20:54:24.898587   57531 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:24.898753   57531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 20:54:24.898792   57531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 20:54:24.898801   57531 certs.go:256] generating profile certs ...
	I0401 20:54:24.898857   57531 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.key
	I0401 20:54:24.898878   57531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.crt with IP's: []
	I0401 20:54:25.515912   57531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.crt ...
	I0401 20:54:25.515942   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.crt: {Name:mkba9924107dad16f37e314d5e0e7c152b394a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:25.516107   57531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.key ...
	I0401 20:54:25.516128   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.key: {Name:mkee776800c61e72fc98702b71ea50660922b324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:25.516210   57531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key.3532d67c
	I0401 20:54:25.516227   57531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt.3532d67c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.128]
	I0401 20:54:25.604387   57531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt.3532d67c ...
	I0401 20:54:25.604414   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt.3532d67c: {Name:mk996d587a7185e1ec61e9e8bd05bee090b6d32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:25.604566   57531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key.3532d67c ...
	I0401 20:54:25.604577   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key.3532d67c: {Name:mk996f4794ad35d65323fbba324e6a6a3e8e9704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:25.604642   57531 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt.3532d67c -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt
	I0401 20:54:25.604709   57531 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key.3532d67c -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key
	I0401 20:54:25.604760   57531 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.key
	I0401 20:54:25.604774   57531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.crt with IP's: []
	I0401 20:54:25.760615   57531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.crt ...
	I0401 20:54:25.760645   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.crt: {Name:mk5bbbe38ec0aca8391a17ef016bf1021d139b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:25.760833   57531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.key ...
	I0401 20:54:25.760869   57531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.key: {Name:mk1975a7ed7f9a78049ea302192551fe88e9c2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:54:25.761073   57531 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 20:54:25.761109   57531 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 20:54:25.761119   57531 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:54:25.761142   57531 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:54:25.761163   57531 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:54:25.761185   57531 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 20:54:25.761220   57531 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 20:54:25.761785   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:54:25.799941   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 20:54:25.824906   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:54:25.872827   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:54:25.906850   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:54:25.933137   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:54:25.959713   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:54:25.985988   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:54:26.015998   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 20:54:26.043052   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:54:26.067840   57531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 20:54:26.093587   57531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:54:26.111608   57531 ssh_runner.go:195] Run: openssl version
	I0401 20:54:26.117642   57531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 20:54:26.128730   57531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 20:54:26.134804   57531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 20:54:26.134869   57531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 20:54:26.140754   57531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:54:26.151557   57531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:54:26.162595   57531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:54:26.167462   57531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:54:26.167515   57531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:54:26.173433   57531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:54:26.184406   57531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 20:54:26.195659   57531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 20:54:26.200206   57531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 20:54:26.200252   57531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 20:54:26.206015   57531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
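	The openssl x509 -hash calls and ln -fs commands above follow the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 in this run) so that TLS clients on the node can locate it. The hash for any of these files can be reproduced manually, for example:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem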
	I0401 20:54:26.216926   57531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:54:26.221210   57531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:54:26.221271   57531 kubeadm.go:392] StartCluster: {Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:54:26.221373   57531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:54:26.221425   57531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:54:26.261636   57531 cri.go:89] found id: ""
	I0401 20:54:26.261713   57531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:54:26.275716   57531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:54:26.288021   57531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:54:26.300202   57531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:54:26.300230   57531 kubeadm.go:157] found existing configuration files:
	
	I0401 20:54:26.300291   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:54:26.309293   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:54:26.309354   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:54:26.318785   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:54:26.328833   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:54:26.328902   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:54:26.339041   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:54:26.348452   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:54:26.348525   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:54:26.359868   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:54:26.376655   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:54:26.376744   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:54:26.387014   57531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 20:54:26.720099   57531 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:56:24.929131   57531 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 20:56:24.929298   57531 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 20:56:24.931412   57531 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:56:24.931536   57531 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:56:24.931845   57531 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:56:24.932292   57531 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:56:24.932416   57531 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:56:24.932477   57531 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:56:24.934344   57531 out.go:235]   - Generating certificates and keys ...
	I0401 20:56:24.934440   57531 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:56:24.934517   57531 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:56:24.934618   57531 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:56:24.934704   57531 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:56:24.934792   57531 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:56:24.934874   57531 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:56:24.934953   57531 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:56:24.935102   57531 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-582207] and IPs [192.168.50.128 127.0.0.1 ::1]
	I0401 20:56:24.935187   57531 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:56:24.935332   57531 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-582207] and IPs [192.168.50.128 127.0.0.1 ::1]
	I0401 20:56:24.935415   57531 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:56:24.935499   57531 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:56:24.935593   57531 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:56:24.935687   57531 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:56:24.935768   57531 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:56:24.935848   57531 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:56:24.935942   57531 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:56:24.936020   57531 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:56:24.936139   57531 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:56:24.936249   57531 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:56:24.936315   57531 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:56:24.936405   57531 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:56:24.938646   57531 out.go:235]   - Booting up control plane ...
	I0401 20:56:24.938751   57531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:56:24.938836   57531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:56:24.938928   57531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:56:24.939050   57531 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:56:24.939207   57531 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:56:24.939276   57531 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 20:56:24.939365   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:56:24.939562   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:56:24.939664   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:56:24.939851   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:56:24.939921   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:56:24.940082   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:56:24.940144   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:56:24.940316   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:56:24.940414   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:56:24.940621   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:56:24.940632   57531 kubeadm.go:310] 
	I0401 20:56:24.940681   57531 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 20:56:24.940734   57531 kubeadm.go:310] 		timed out waiting for the condition
	I0401 20:56:24.940744   57531 kubeadm.go:310] 
	I0401 20:56:24.940808   57531 kubeadm.go:310] 	This error is likely caused by:
	I0401 20:56:24.940853   57531 kubeadm.go:310] 		- The kubelet is not running
	I0401 20:56:24.941007   57531 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 20:56:24.941016   57531 kubeadm.go:310] 
	I0401 20:56:24.941135   57531 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 20:56:24.941190   57531 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 20:56:24.941249   57531 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 20:56:24.941291   57531 kubeadm.go:310] 
	I0401 20:56:24.941448   57531 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 20:56:24.941527   57531 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 20:56:24.941537   57531 kubeadm.go:310] 
	I0401 20:56:24.941632   57531 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 20:56:24.941725   57531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 20:56:24.941836   57531 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 20:56:24.941906   57531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 20:56:24.941948   57531 kubeadm.go:310] 
	W0401 20:56:24.942018   57531 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-582207] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-582207] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-582207] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-582207] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0401 20:56:24.942053   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 20:56:25.704127   57531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:56:25.720101   57531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:56:25.732408   57531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:56:25.732428   57531 kubeadm.go:157] found existing configuration files:
	
	I0401 20:56:25.732478   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:56:25.742178   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:56:25.742254   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:56:25.751955   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:56:25.761048   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:56:25.761095   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:56:25.770719   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:56:25.780127   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:56:25.780183   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:56:25.789943   57531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:56:25.799399   57531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:56:25.799477   57531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:56:25.809294   57531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 20:56:25.899039   57531 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:56:25.899092   57531 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:56:26.041549   57531 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:56:26.041709   57531 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:56:26.041822   57531 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:56:26.246757   57531 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:56:26.248679   57531 out.go:235]   - Generating certificates and keys ...
	I0401 20:56:26.248783   57531 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:56:26.248893   57531 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:56:26.249027   57531 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 20:56:26.249118   57531 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 20:56:26.249211   57531 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 20:56:26.249295   57531 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 20:56:26.249377   57531 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 20:56:26.249467   57531 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 20:56:26.249574   57531 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 20:56:26.249688   57531 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 20:56:26.249751   57531 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 20:56:26.249834   57531 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:56:26.577676   57531 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:56:26.690473   57531 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:56:26.807354   57531 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:56:27.040281   57531 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:56:27.056662   57531 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:56:27.056803   57531 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:56:27.056875   57531 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:56:27.192770   57531 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:56:27.194250   57531 out.go:235]   - Booting up control plane ...
	I0401 20:56:27.194388   57531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:56:27.203052   57531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:56:27.204499   57531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:56:27.205575   57531 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:56:27.208713   57531 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:57:07.209670   57531 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 20:57:07.210202   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:57:07.210470   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:57:12.210612   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:57:12.210807   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:57:22.211042   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:57:22.211308   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:57:42.211899   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:57:42.212134   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:58:22.213675   57531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 20:58:22.213989   57531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 20:58:22.214017   57531 kubeadm.go:310] 
	I0401 20:58:22.214081   57531 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 20:58:22.214133   57531 kubeadm.go:310] 		timed out waiting for the condition
	I0401 20:58:22.214147   57531 kubeadm.go:310] 
	I0401 20:58:22.214195   57531 kubeadm.go:310] 	This error is likely caused by:
	I0401 20:58:22.214284   57531 kubeadm.go:310] 		- The kubelet is not running
	I0401 20:58:22.214452   57531 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 20:58:22.214478   57531 kubeadm.go:310] 
	I0401 20:58:22.214649   57531 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 20:58:22.214710   57531 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 20:58:22.214779   57531 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 20:58:22.214791   57531 kubeadm.go:310] 
	I0401 20:58:22.214935   57531 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 20:58:22.215052   57531 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 20:58:22.215061   57531 kubeadm.go:310] 
	I0401 20:58:22.215227   57531 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 20:58:22.215364   57531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 20:58:22.215470   57531 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 20:58:22.215580   57531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 20:58:22.215593   57531 kubeadm.go:310] 
	I0401 20:58:22.216105   57531 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:58:22.216246   57531 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 20:58:22.216350   57531 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
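	At this point kubeadm init has failed twice with the kubelet never answering on http://localhost:10248/healthz, and minikube falls back to collecting diagnostics below. The troubleshooting steps kubeadm itself suggests could also be run over the profile's SSH session, for example:
	  minikube ssh -p old-k8s-version-582207 -- sudo systemctl status kubelet
	  minikube ssh -p old-k8s-version-582207 -- sudo journalctl -xeu kubelet
	  minikube ssh -p old-k8s-version-582207 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a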
	I0401 20:58:22.216422   57531 kubeadm.go:394] duration metric: took 3m55.995154233s to StartCluster
	I0401 20:58:22.216463   57531 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 20:58:22.216542   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 20:58:22.259163   57531 cri.go:89] found id: ""
	I0401 20:58:22.259192   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.259204   57531 logs.go:284] No container was found matching "kube-apiserver"
	I0401 20:58:22.259212   57531 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 20:58:22.259270   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 20:58:22.299593   57531 cri.go:89] found id: ""
	I0401 20:58:22.299626   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.299637   57531 logs.go:284] No container was found matching "etcd"
	I0401 20:58:22.299645   57531 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 20:58:22.299706   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 20:58:22.343834   57531 cri.go:89] found id: ""
	I0401 20:58:22.343872   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.343883   57531 logs.go:284] No container was found matching "coredns"
	I0401 20:58:22.343889   57531 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 20:58:22.343954   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 20:58:22.378699   57531 cri.go:89] found id: ""
	I0401 20:58:22.378725   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.378736   57531 logs.go:284] No container was found matching "kube-scheduler"
	I0401 20:58:22.378743   57531 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 20:58:22.378822   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 20:58:22.419615   57531 cri.go:89] found id: ""
	I0401 20:58:22.419644   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.419655   57531 logs.go:284] No container was found matching "kube-proxy"
	I0401 20:58:22.419662   57531 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 20:58:22.419718   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 20:58:22.454416   57531 cri.go:89] found id: ""
	I0401 20:58:22.454441   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.454451   57531 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 20:58:22.454459   57531 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 20:58:22.454511   57531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 20:58:22.488100   57531 cri.go:89] found id: ""
	I0401 20:58:22.488126   57531 logs.go:282] 0 containers: []
	W0401 20:58:22.488136   57531 logs.go:284] No container was found matching "kindnet"
	I0401 20:58:22.488148   57531 logs.go:123] Gathering logs for kubelet ...
	I0401 20:58:22.488163   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 20:58:22.540420   57531 logs.go:123] Gathering logs for dmesg ...
	I0401 20:58:22.540458   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 20:58:22.555921   57531 logs.go:123] Gathering logs for describe nodes ...
	I0401 20:58:22.555959   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 20:58:22.689880   57531 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 20:58:22.689901   57531 logs.go:123] Gathering logs for CRI-O ...
	I0401 20:58:22.689912   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 20:58:22.840667   57531 logs.go:123] Gathering logs for container status ...
	I0401 20:58:22.840699   57531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0401 20:58:22.935139   57531 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 20:58:22.935205   57531 out.go:270] * 
	* 
	W0401 20:58:22.935271   57531 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 20:58:22.935295   57531 out.go:270] * 
	* 
	W0401 20:58:22.936338   57531 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:58:22.939789   57531 out.go:201] 
	W0401 20:58:22.941351   57531 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 20:58:22.941411   57531 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 20:58:22.941439   57531 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 20:58:22.943089   57531 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 6 (252.009119ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 20:58:23.250425   60820 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-582207" does not appear in /home/jenkins/minikube-integration/20506-9129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-582207" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (280.67s)
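
The kubeadm output above already names the two follow-ups: find out why the kubelet never answered on 127.0.0.1:10248, and look for control-plane containers that CRI-O may have started and crashed. A minimal triage sketch along those lines, assuming shell access to the node through `minikube ssh` with this test's profile; CONTAINERID is a placeholder to be taken from the crictl listing:

	out/minikube-linux-amd64 ssh -p old-k8s-version-582207
	# Inside the VM: is the kubelet running, and what does its journal say?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List any Kubernetes containers CRI-O started (pause containers filtered out).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container found in the listing above.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the journal points at a cgroup-driver mismatch, the suggestion logged above is to retry with the kubelet pinned to systemd; a hedged sketch of that retry, reusing flags from the failed invocation:

	out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd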

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-582207 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-582207 create -f testdata/busybox.yaml: exit status 1 (42.924297ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-582207" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-582207 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 6 (237.724899ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 20:58:23.529607   60873 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-582207" does not appear in /home/jenkins/minikube-integration/20506-9129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-582207" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 6 (267.763986ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 20:58:23.800710   60902 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-582207" does not appear in /home/jenkins/minikube-integration/20506-9129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-582207" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
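
The DeployApp failure is a knock-on effect of the aborted first start: the profile was never written to /home/jenkins/minikube-integration/20506-9129/kubeconfig, so the "old-k8s-version-582207" context the test uses does not exist, and the status warning above recommends `minikube update-context`. A small sketch of that repair, assuming the cluster has since come up (with no running apiserver there is nothing for update-context to point at):

	# Refresh the kubeconfig entry for this profile, as the warning suggests.
	out/minikube-linux-amd64 update-context -p old-k8s-version-582207
	# Confirm the context exists before re-running the deploy step.
	kubectl config get-contexts old-k8s-version-582207
	kubectl --context old-k8s-version-582207 create -f testdata/busybox.yaml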

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-582207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0401 20:59:06.658508   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-582207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.706297021s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-582207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-582207 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-582207 describe deploy/metrics-server -n kube-system: exit status 1 (46.160653ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-582207" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-582207 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 6 (232.581064ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 21:00:10.787058   61365 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-582207" does not appear in /home/jenkins/minikube-integration/20506-9129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-582207" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.99s)
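
The addon enable got as far as rendering the metrics-server manifests; it failed only when its callback ran kubectl apply against the apiserver on localhost:8443, which the broken first boot never brought up. A hedged sketch of the re-run once the control plane is reachable, reusing the exact addon arguments from the test; the final grep simply checks that the overridden image was picked up:

	# Verify the apiserver answers before re-enabling the addon.
	kubectl --context old-k8s-version-582207 get --raw=/healthz
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-582207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# The deployment should then report the overridden image.
	kubectl --context old-k8s-version-582207 -n kube-system describe deploy/metrics-server | grep Image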

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (513.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0401 21:00:29.730243   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:02:27.799559   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m32.111619371s)

                                                
                                                
-- stdout --
	* [old-k8s-version-582207] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-582207" primary control-plane node in "old-k8s-version-582207" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-582207" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 21:00:13.343878   61496 out.go:345] Setting OutFile to fd 1 ...
	I0401 21:00:13.344188   61496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:00:13.344201   61496 out.go:358] Setting ErrFile to fd 2...
	I0401 21:00:13.344208   61496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:00:13.344470   61496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 21:00:13.345101   61496 out.go:352] Setting JSON to false
	I0401 21:00:13.346185   61496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6157,"bootTime":1743535056,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 21:00:13.346347   61496 start.go:139] virtualization: kvm guest
	I0401 21:00:13.348568   61496 out.go:177] * [old-k8s-version-582207] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 21:00:13.350441   61496 notify.go:220] Checking for updates...
	I0401 21:00:13.350450   61496 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 21:00:13.352085   61496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 21:00:13.353526   61496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:00:13.354933   61496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:00:13.356225   61496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 21:00:13.357602   61496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 21:00:13.359242   61496 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 21:00:13.359628   61496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:00:13.359684   61496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:00:13.375248   61496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0401 21:00:13.375805   61496 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:00:13.376329   61496 main.go:141] libmachine: Using API Version  1
	I0401 21:00:13.376386   61496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:00:13.376726   61496 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:00:13.376891   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:13.378684   61496 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0401 21:00:13.380056   61496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 21:00:13.380406   61496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:00:13.380446   61496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:00:13.396978   61496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0401 21:00:13.397419   61496 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:00:13.397899   61496 main.go:141] libmachine: Using API Version  1
	I0401 21:00:13.397923   61496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:00:13.398320   61496 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:00:13.398503   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:13.437357   61496 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 21:00:13.438571   61496 start.go:297] selected driver: kvm2
	I0401 21:00:13.438588   61496 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:00:13.438692   61496 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 21:00:13.439385   61496 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:00:13.439455   61496 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 21:00:13.455273   61496 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 21:00:13.455819   61496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:00:13.455865   61496 cni.go:84] Creating CNI manager for ""
	I0401 21:00:13.455925   61496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 21:00:13.455969   61496 start.go:340] cluster config:
	{Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:00:13.456102   61496 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:00:13.458835   61496 out.go:177] * Starting "old-k8s-version-582207" primary control-plane node in "old-k8s-version-582207" cluster
	I0401 21:00:13.460245   61496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 21:00:13.460293   61496 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 21:00:13.460303   61496 cache.go:56] Caching tarball of preloaded images
	I0401 21:00:13.460417   61496 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 21:00:13.460430   61496 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 21:00:13.460551   61496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/config.json ...
	I0401 21:00:13.460769   61496 start.go:360] acquireMachinesLock for old-k8s-version-582207: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 21:00:13.460827   61496 start.go:364] duration metric: took 35.572µs to acquireMachinesLock for "old-k8s-version-582207"
	I0401 21:00:13.460861   61496 start.go:96] Skipping create...Using existing machine configuration
	I0401 21:00:13.460872   61496 fix.go:54] fixHost starting: 
	I0401 21:00:13.461185   61496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:00:13.461226   61496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:00:13.477097   61496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I0401 21:00:13.477669   61496 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:00:13.478262   61496 main.go:141] libmachine: Using API Version  1
	I0401 21:00:13.478283   61496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:00:13.478658   61496 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:00:13.478844   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:13.479004   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetState
	I0401 21:00:13.480732   61496 fix.go:112] recreateIfNeeded on old-k8s-version-582207: state=Stopped err=<nil>
	I0401 21:00:13.480751   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	W0401 21:00:13.480882   61496 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 21:00:13.482818   61496 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-582207" ...
	I0401 21:00:13.484022   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .Start
	I0401 21:00:13.484210   61496 main.go:141] libmachine: (old-k8s-version-582207) starting domain...
	I0401 21:00:13.484227   61496 main.go:141] libmachine: (old-k8s-version-582207) ensuring networks are active...
	I0401 21:00:13.484973   61496 main.go:141] libmachine: (old-k8s-version-582207) Ensuring network default is active
	I0401 21:00:13.485360   61496 main.go:141] libmachine: (old-k8s-version-582207) Ensuring network mk-old-k8s-version-582207 is active
	I0401 21:00:13.485722   61496 main.go:141] libmachine: (old-k8s-version-582207) getting domain XML...
	I0401 21:00:13.486438   61496 main.go:141] libmachine: (old-k8s-version-582207) creating domain...
	I0401 21:00:14.776252   61496 main.go:141] libmachine: (old-k8s-version-582207) waiting for IP...
	I0401 21:00:14.777152   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:14.777582   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:14.777670   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:14.777584   61531 retry.go:31] will retry after 299.884374ms: waiting for domain to come up
	I0401 21:00:15.079555   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:15.080189   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:15.080227   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:15.080149   61531 retry.go:31] will retry after 309.93662ms: waiting for domain to come up
	I0401 21:00:15.391844   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:15.392381   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:15.392408   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:15.392347   61531 retry.go:31] will retry after 414.928792ms: waiting for domain to come up
	I0401 21:00:15.809052   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:15.809586   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:15.809608   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:15.809538   61531 retry.go:31] will retry after 569.428842ms: waiting for domain to come up
	I0401 21:00:16.380314   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:16.380915   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:16.380940   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:16.380885   61531 retry.go:31] will retry after 637.257895ms: waiting for domain to come up
	I0401 21:00:17.019827   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:17.020636   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:17.020673   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:17.020554   61531 retry.go:31] will retry after 745.381512ms: waiting for domain to come up
	I0401 21:00:17.767204   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:17.767731   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:17.767767   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:17.767700   61531 retry.go:31] will retry after 1.056561983s: waiting for domain to come up
	I0401 21:00:18.826434   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:18.826942   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:18.826975   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:18.826890   61531 retry.go:31] will retry after 1.007396233s: waiting for domain to come up
	I0401 21:00:19.835548   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:19.836149   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:19.836187   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:19.835979   61531 retry.go:31] will retry after 1.413651563s: waiting for domain to come up
	I0401 21:00:21.251724   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:21.252240   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:21.252262   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:21.252210   61531 retry.go:31] will retry after 1.725199337s: waiting for domain to come up
	I0401 21:00:22.980032   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:22.980780   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:22.980812   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:22.980736   61531 retry.go:31] will retry after 2.322319216s: waiting for domain to come up
	I0401 21:00:25.304582   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:25.305179   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:25.305236   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:25.305154   61531 retry.go:31] will retry after 2.547861622s: waiting for domain to come up
	I0401 21:00:27.855254   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:27.855697   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | unable to find current IP address of domain old-k8s-version-582207 in network mk-old-k8s-version-582207
	I0401 21:00:27.855729   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | I0401 21:00:27.855666   61531 retry.go:31] will retry after 4.524490952s: waiting for domain to come up
	I0401 21:00:32.383251   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.383855   61496 main.go:141] libmachine: (old-k8s-version-582207) found domain IP: 192.168.50.128
	I0401 21:00:32.383896   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has current primary IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.383908   61496 main.go:141] libmachine: (old-k8s-version-582207) reserving static IP address...
	I0401 21:00:32.384306   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "old-k8s-version-582207", mac: "52:54:00:56:a4:0e", ip: "192.168.50.128"} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.384347   61496 main.go:141] libmachine: (old-k8s-version-582207) reserved static IP address 192.168.50.128 for domain old-k8s-version-582207
	I0401 21:00:32.384368   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | skip adding static IP to network mk-old-k8s-version-582207 - found existing host DHCP lease matching {name: "old-k8s-version-582207", mac: "52:54:00:56:a4:0e", ip: "192.168.50.128"}
	I0401 21:00:32.384393   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | Getting to WaitForSSH function...
	I0401 21:00:32.384409   61496 main.go:141] libmachine: (old-k8s-version-582207) waiting for SSH...
	I0401 21:00:32.386894   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.387253   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.387282   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.387451   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | Using SSH client type: external
	I0401 21:00:32.387484   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa (-rw-------)
	I0401 21:00:32.387540   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:00:32.387573   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | About to run SSH command:
	I0401 21:00:32.387595   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | exit 0
	I0401 21:00:32.515281   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | SSH cmd err, output: <nil>: 
	I0401 21:00:32.515654   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetConfigRaw
	I0401 21:00:32.516277   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 21:00:32.518537   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.518894   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.518935   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.519164   61496 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/config.json ...
	I0401 21:00:32.519421   61496 machine.go:93] provisionDockerMachine start ...
	I0401 21:00:32.519445   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:32.519671   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:32.522258   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.522678   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.522708   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.522957   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:32.523167   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:32.523278   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:32.523376   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:32.523481   61496 main.go:141] libmachine: Using SSH client type: native
	I0401 21:00:32.523689   61496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 21:00:32.523701   61496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 21:00:32.634924   61496 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 21:00:32.634956   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 21:00:32.635209   61496 buildroot.go:166] provisioning hostname "old-k8s-version-582207"
	I0401 21:00:32.635234   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 21:00:32.635422   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:32.638479   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.638870   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.638913   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.639041   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:32.639292   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:32.639599   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:32.639758   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:32.639912   61496 main.go:141] libmachine: Using SSH client type: native
	I0401 21:00:32.640104   61496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 21:00:32.640115   61496 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-582207 && echo "old-k8s-version-582207" | sudo tee /etc/hostname
	I0401 21:00:32.772329   61496 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-582207
	
	I0401 21:00:32.772357   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:32.775403   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.776049   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.776114   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.776292   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:32.776524   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:32.776753   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:32.776951   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:32.777154   61496 main.go:141] libmachine: Using SSH client type: native
	I0401 21:00:32.777381   61496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 21:00:32.777400   61496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-582207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-582207/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-582207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:00:32.901358   61496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:00:32.901410   61496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:00:32.901438   61496 buildroot.go:174] setting up certificates
	I0401 21:00:32.901450   61496 provision.go:84] configureAuth start
	I0401 21:00:32.901462   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetMachineName
	I0401 21:00:32.901768   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 21:00:32.904773   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.905177   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.905207   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.905350   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:32.907874   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.908250   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:32.908297   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:32.908487   61496 provision.go:143] copyHostCerts
	I0401 21:00:32.908556   61496 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:00:32.908583   61496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:00:32.908668   61496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:00:32.908846   61496 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:00:32.908858   61496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:00:32.908899   61496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:00:32.909001   61496 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:00:32.909012   61496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:00:32.909046   61496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:00:32.909138   61496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-582207 san=[127.0.0.1 192.168.50.128 localhost minikube old-k8s-version-582207]
	I0401 21:00:33.564990   61496 provision.go:177] copyRemoteCerts
	I0401 21:00:33.565084   61496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:00:33.565114   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:33.568158   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:33.568576   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:33.568606   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:33.568877   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:33.569073   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:33.569226   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:33.569371   61496 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 21:00:33.659462   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:00:33.687731   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 21:00:33.716066   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 21:00:33.746789   61496 provision.go:87] duration metric: took 845.300373ms to configureAuth
	I0401 21:00:33.746819   61496 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:00:33.747008   61496 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 21:00:33.747091   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:33.749735   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:33.750083   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:33.750117   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:33.750249   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:33.750528   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:33.750691   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:33.750814   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:33.751015   61496 main.go:141] libmachine: Using SSH client type: native
	I0401 21:00:33.751244   61496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 21:00:33.751263   61496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:00:34.007544   61496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:00:34.007573   61496 machine.go:96] duration metric: took 1.488136008s to provisionDockerMachine
	I0401 21:00:34.007585   61496 start.go:293] postStartSetup for "old-k8s-version-582207" (driver="kvm2")
	I0401 21:00:34.007599   61496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:00:34.007619   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:34.007939   61496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:00:34.007986   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:34.010781   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.011126   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:34.011154   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.011335   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:34.011610   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:34.011824   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:34.011995   61496 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 21:00:34.105618   61496 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:00:34.111721   61496 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:00:34.111765   61496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:00:34.111854   61496 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:00:34.111932   61496 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:00:34.112015   61496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:00:34.122724   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:00:34.151110   61496 start.go:296] duration metric: took 143.510954ms for postStartSetup
	I0401 21:00:34.151147   61496 fix.go:56] duration metric: took 20.69027586s for fixHost
	I0401 21:00:34.151166   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:34.153862   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.154291   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:34.154324   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.154506   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:34.154708   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:34.154885   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:34.155027   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:34.155182   61496 main.go:141] libmachine: Using SSH client type: native
	I0401 21:00:34.155409   61496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0401 21:00:34.155422   61496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:00:34.271526   61496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541234.244506863
	
	I0401 21:00:34.271554   61496 fix.go:216] guest clock: 1743541234.244506863
	I0401 21:00:34.271571   61496 fix.go:229] Guest: 2025-04-01 21:00:34.244506863 +0000 UTC Remote: 2025-04-01 21:00:34.151151158 +0000 UTC m=+20.845383608 (delta=93.355705ms)
	I0401 21:00:34.271598   61496 fix.go:200] guest clock delta is within tolerance: 93.355705ms
	I0401 21:00:34.271604   61496 start.go:83] releasing machines lock for "old-k8s-version-582207", held for 20.810763306s
	I0401 21:00:34.271637   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:34.271918   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 21:00:34.274922   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.275302   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:34.275355   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.275467   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:34.275995   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:34.276225   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .DriverName
	I0401 21:00:34.276337   61496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:00:34.276388   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:34.276510   61496 ssh_runner.go:195] Run: cat /version.json
	I0401 21:00:34.276536   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHHostname
	I0401 21:00:34.279212   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.279535   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.279609   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:34.279634   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.279759   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:34.279987   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:34.280078   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:34.280104   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:34.280142   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:34.280234   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHPort
	I0401 21:00:34.280323   61496 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 21:00:34.280378   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHKeyPath
	I0401 21:00:34.280497   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetSSHUsername
	I0401 21:00:34.280658   61496 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/old-k8s-version-582207/id_rsa Username:docker}
	I0401 21:00:34.360369   61496 ssh_runner.go:195] Run: systemctl --version
	I0401 21:00:34.385697   61496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:00:34.540534   61496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:00:34.547106   61496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:00:34.547169   61496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:00:34.565458   61496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:00:34.565487   61496 start.go:495] detecting cgroup driver to use...
	I0401 21:00:34.565610   61496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:00:34.583431   61496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:00:34.598464   61496 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:00:34.598517   61496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:00:34.614865   61496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:00:34.630447   61496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:00:34.758026   61496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:00:34.919543   61496 docker.go:233] disabling docker service ...
	I0401 21:00:34.919617   61496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:00:34.935736   61496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:00:34.950506   61496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:00:35.119756   61496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:00:35.261987   61496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:00:35.277092   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:00:35.297156   61496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 21:00:35.297212   61496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:00:35.308462   61496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:00:35.308539   61496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:00:35.319943   61496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:00:35.331189   61496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:00:35.343652   61496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:00:35.355984   61496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:00:35.366403   61496 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:00:35.366490   61496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:00:35.380590   61496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:00:35.390877   61496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:00:35.531128   61496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:00:35.627460   61496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:00:35.627538   61496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:00:35.632680   61496 start.go:563] Will wait 60s for crictl version
	I0401 21:00:35.632746   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:35.637044   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:00:35.680040   61496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:00:35.680136   61496 ssh_runner.go:195] Run: crio --version
	I0401 21:00:35.709883   61496 ssh_runner.go:195] Run: crio --version
	I0401 21:00:35.743331   61496 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 21:00:35.744733   61496 main.go:141] libmachine: (old-k8s-version-582207) Calling .GetIP
	I0401 21:00:35.747723   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:35.748123   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a4:0e", ip: ""} in network mk-old-k8s-version-582207: {Iface:virbr3 ExpiryTime:2025-04-01 22:00:25 +0000 UTC Type:0 Mac:52:54:00:56:a4:0e Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:old-k8s-version-582207 Clientid:01:52:54:00:56:a4:0e}
	I0401 21:00:35.748140   61496 main.go:141] libmachine: (old-k8s-version-582207) DBG | domain old-k8s-version-582207 has defined IP address 192.168.50.128 and MAC address 52:54:00:56:a4:0e in network mk-old-k8s-version-582207
	I0401 21:00:35.748371   61496 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 21:00:35.752657   61496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:00:35.766656   61496 kubeadm.go:883] updating cluster {Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:00:35.766765   61496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 21:00:35.766812   61496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:00:35.816037   61496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 21:00:35.816112   61496 ssh_runner.go:195] Run: which lz4
	I0401 21:00:35.820153   61496 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:00:35.824830   61496 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:00:35.824859   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 21:00:37.625820   61496 crio.go:462] duration metric: took 1.805687747s to copy over tarball
	I0401 21:00:37.625897   61496 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:00:40.930019   61496 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.304086336s)
	I0401 21:00:40.930053   61496 crio.go:469] duration metric: took 3.304199583s to extract the tarball
	I0401 21:00:40.930063   61496 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:00:40.975827   61496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:00:41.016897   61496 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 21:00:41.016923   61496 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 21:00:41.016989   61496 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:00:41.017028   61496 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.017039   61496 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.017063   61496 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.017088   61496 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.017095   61496 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.017102   61496 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 21:00:41.017011   61496 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.018403   61496 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.018565   61496 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.018575   61496 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:00:41.018576   61496 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.018633   61496 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.018643   61496 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.018643   61496 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 21:00:41.018567   61496 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.151848   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.158784   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.165620   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.170315   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.172019   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 21:00:41.173306   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.174899   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.263290   61496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 21:00:41.263339   61496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.263393   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.284806   61496 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 21:00:41.284851   61496 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.284892   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.358891   61496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 21:00:41.358941   61496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.359040   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.362665   61496 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 21:00:41.362705   61496 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.362749   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.362745   61496 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 21:00:41.362837   61496 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 21:00:41.362894   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.364134   61496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 21:00:41.364166   61496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.364196   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.364200   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.364140   61496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 21:00:41.364243   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.364261   61496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.364303   61496 ssh_runner.go:195] Run: which crictl
	I0401 21:00:41.370691   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.372140   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.372176   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 21:00:41.384753   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.516402   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.516519   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.516520   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.516520   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.528616   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.528740   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 21:00:41.548959   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.686158   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 21:00:41.693612   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 21:00:41.693654   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 21:00:41.698426   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 21:00:41.698440   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 21:00:41.698507   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.720693   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 21:00:41.822994   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 21:00:41.849458   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 21:00:41.859166   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 21:00:41.859609   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 21:00:41.876343   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 21:00:41.876358   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 21:00:41.876345   61496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 21:00:41.916671   61496 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 21:00:42.297864   61496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:00:42.450890   61496 cache_images.go:92] duration metric: took 1.43395322s to LoadCachedImages
	W0401 21:00:42.450972   61496 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-9129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0401 21:00:42.450989   61496 kubeadm.go:934] updating node { 192.168.50.128 8443 v1.20.0 crio true true} ...
	I0401 21:00:42.451086   61496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-582207 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 21:00:42.451166   61496 ssh_runner.go:195] Run: crio config
	I0401 21:00:42.511180   61496 cni.go:84] Creating CNI manager for ""
	I0401 21:00:42.511230   61496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 21:00:42.511252   61496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:00:42.511278   61496 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-582207 NodeName:old-k8s-version-582207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 21:00:42.511441   61496 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-582207"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 21:00:42.511516   61496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 21:00:42.522855   61496 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:00:42.522934   61496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:00:42.535434   61496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 21:00:42.555669   61496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:00:42.576127   61496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 21:00:42.595421   61496 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0401 21:00:42.599637   61496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:00:42.614460   61496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:00:42.760118   61496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:00:42.780831   61496 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207 for IP: 192.168.50.128
	I0401 21:00:42.780860   61496 certs.go:194] generating shared ca certs ...
	I0401 21:00:42.780881   61496 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:00:42.781068   61496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:00:42.781121   61496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:00:42.781134   61496 certs.go:256] generating profile certs ...
	I0401 21:00:42.781267   61496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/client.key
	I0401 21:00:42.781328   61496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key.3532d67c
	I0401 21:00:42.781379   61496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.key
	I0401 21:00:42.781535   61496 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:00:42.781582   61496 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:00:42.781596   61496 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:00:42.781632   61496 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:00:42.781663   61496 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:00:42.781696   61496 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:00:42.781756   61496 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:00:42.782531   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:00:42.825765   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:00:42.863421   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:00:42.895641   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:00:42.931964   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 21:00:42.963604   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 21:00:42.997702   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:00:43.031365   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/old-k8s-version-582207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:00:43.068433   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:00:43.097768   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:00:43.127093   61496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:00:43.155001   61496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 21:00:43.175286   61496 ssh_runner.go:195] Run: openssl version
	I0401 21:00:43.181775   61496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:00:43.193537   61496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:00:43.198447   61496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:00:43.198512   61496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:00:43.204915   61496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 21:00:43.217232   61496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:00:43.229996   61496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:00:43.234787   61496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:00:43.234866   61496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:00:43.241379   61496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:00:43.253902   61496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:00:43.266054   61496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:00:43.271336   61496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:00:43.271406   61496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:00:43.277722   61496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:00:43.290794   61496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:00:43.295459   61496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 21:00:43.301695   61496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 21:00:43.307969   61496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 21:00:43.314685   61496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 21:00:43.321177   61496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 21:00:43.328264   61496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 21:00:43.334961   61496 kubeadm.go:392] StartCluster: {Name:old-k8s-version-582207 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-582207 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:00:43.335055   61496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:00:43.335110   61496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:00:43.378108   61496 cri.go:89] found id: ""
	I0401 21:00:43.378176   61496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:00:43.390845   61496 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 21:00:43.390914   61496 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 21:00:43.390987   61496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 21:00:43.402705   61496 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 21:00:43.403332   61496 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-582207" does not appear in /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:00:43.403744   61496 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-9129/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-582207" cluster setting kubeconfig missing "old-k8s-version-582207" context setting]
	I0401 21:00:43.404460   61496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:00:43.406301   61496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 21:00:43.418924   61496 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0401 21:00:43.418956   61496 kubeadm.go:1160] stopping kube-system containers ...
	I0401 21:00:43.418965   61496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 21:00:43.419009   61496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:00:43.463205   61496 cri.go:89] found id: ""
	I0401 21:00:43.463287   61496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 21:00:43.482490   61496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:00:43.494543   61496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:00:43.494560   61496 kubeadm.go:157] found existing configuration files:
	
	I0401 21:00:43.494601   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:00:43.505975   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:00:43.506046   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:00:43.517762   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:00:43.528919   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:00:43.528994   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:00:43.539766   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:00:43.550350   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:00:43.550411   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:00:43.561861   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:00:43.573462   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:00:43.573516   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:00:43.584077   61496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:00:43.594900   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 21:00:43.729120   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 21:00:44.306238   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 21:00:44.576326   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 21:00:44.703625   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 21:00:44.794853   61496 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:00:44.794935   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:45.295401   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:45.795346   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:46.295461   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:46.795053   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:47.295158   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:47.795681   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:48.295002   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:48.795933   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:49.295132   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:49.795127   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:50.295035   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:50.795476   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:51.295249   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:51.795494   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:52.296028   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:52.795032   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:53.295454   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:53.795257   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:54.295446   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:54.795241   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:55.295452   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:55.795608   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:56.295470   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:56.796050   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:57.295461   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:57.795843   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:58.295262   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:58.795787   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:59.295112   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:00:59.795053   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:00.295147   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:00.796016   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:01.295551   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:01.795070   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:02.295676   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:02.795967   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:03.295435   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:03.795126   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:04.295480   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:04.795420   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:05.295391   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:05.796044   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:06.295505   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:06.795656   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:07.295414   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:07.795454   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:08.295124   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:08.795503   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:09.295469   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:09.795420   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:10.295444   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:10.795105   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:11.295803   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:11.796059   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:12.295738   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:12.795124   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:13.295964   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:13.795067   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:14.295881   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:14.795453   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:15.295674   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:15.795316   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:16.295430   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:16.795449   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:17.294996   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:17.795078   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:18.295479   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:18.795445   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:19.295368   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:19.795943   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:20.295091   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:20.795501   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:21.295910   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:21.795406   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:22.295548   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:22.795471   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:23.295419   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:23.795857   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:24.295223   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:24.795048   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:25.295801   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:25.795429   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:26.295460   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:26.795106   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:27.295451   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:27.795574   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:28.295839   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:28.795040   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:29.295553   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:29.795084   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:30.295854   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:30.795485   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:31.295407   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:31.795418   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:32.295090   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:32.795414   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:33.295497   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:33.795953   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:34.296005   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:34.795462   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:35.295465   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:35.795413   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:36.295527   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:36.795097   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:37.295220   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:37.795586   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:38.295254   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:38.795939   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:39.295434   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:39.795226   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:40.295426   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:40.795666   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:41.295457   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:41.795413   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:42.295449   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:42.795741   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:43.295912   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:43.795848   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:44.295308   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:44.795298   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:01:44.795375   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:01:44.837403   61496 cri.go:89] found id: ""
	I0401 21:01:44.837431   61496 logs.go:282] 0 containers: []
	W0401 21:01:44.837438   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:01:44.837444   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:01:44.837504   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:01:44.876819   61496 cri.go:89] found id: ""
	I0401 21:01:44.876847   61496 logs.go:282] 0 containers: []
	W0401 21:01:44.876868   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:01:44.876874   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:01:44.876941   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:01:44.919909   61496 cri.go:89] found id: ""
	I0401 21:01:44.919934   61496 logs.go:282] 0 containers: []
	W0401 21:01:44.919942   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:01:44.919949   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:01:44.920006   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:01:44.961977   61496 cri.go:89] found id: ""
	I0401 21:01:44.962000   61496 logs.go:282] 0 containers: []
	W0401 21:01:44.962007   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:01:44.962013   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:01:44.962068   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:01:45.014100   61496 cri.go:89] found id: ""
	I0401 21:01:45.014135   61496 logs.go:282] 0 containers: []
	W0401 21:01:45.014144   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:01:45.014149   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:01:45.014253   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:01:45.050554   61496 cri.go:89] found id: ""
	I0401 21:01:45.050578   61496 logs.go:282] 0 containers: []
	W0401 21:01:45.050586   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:01:45.050591   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:01:45.050644   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:01:45.094362   61496 cri.go:89] found id: ""
	I0401 21:01:45.094385   61496 logs.go:282] 0 containers: []
	W0401 21:01:45.094395   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:01:45.094402   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:01:45.094462   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:01:45.142527   61496 cri.go:89] found id: ""
	I0401 21:01:45.142556   61496 logs.go:282] 0 containers: []
	W0401 21:01:45.142565   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:01:45.142573   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:01:45.142582   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:01:45.157411   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:01:45.157448   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:01:45.285111   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:01:45.285133   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:01:45.285150   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:01:45.364724   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:01:45.364758   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:01:45.410821   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:01:45.410848   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:01:47.966985   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:47.981208   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:01:47.981277   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:01:48.019198   61496 cri.go:89] found id: ""
	I0401 21:01:48.019225   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.019232   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:01:48.019238   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:01:48.019299   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:01:48.054106   61496 cri.go:89] found id: ""
	I0401 21:01:48.054137   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.054147   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:01:48.054153   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:01:48.054232   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:01:48.091101   61496 cri.go:89] found id: ""
	I0401 21:01:48.091133   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.091143   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:01:48.091149   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:01:48.091215   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:01:48.130621   61496 cri.go:89] found id: ""
	I0401 21:01:48.130650   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.130659   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:01:48.130666   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:01:48.130725   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:01:48.167800   61496 cri.go:89] found id: ""
	I0401 21:01:48.167829   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.167837   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:01:48.167842   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:01:48.167924   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:01:48.203760   61496 cri.go:89] found id: ""
	I0401 21:01:48.203785   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.203802   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:01:48.203808   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:01:48.203894   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:01:48.246737   61496 cri.go:89] found id: ""
	I0401 21:01:48.246771   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.246782   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:01:48.246789   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:01:48.246854   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:01:48.284352   61496 cri.go:89] found id: ""
	I0401 21:01:48.284375   61496 logs.go:282] 0 containers: []
	W0401 21:01:48.284383   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:01:48.284396   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:01:48.284405   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:01:48.365685   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:01:48.365725   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:01:48.415941   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:01:48.415978   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:01:48.468385   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:01:48.468419   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:01:48.482798   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:01:48.482824   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:01:48.556904   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:01:51.058351   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:51.074314   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:01:51.074382   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:01:51.109507   61496 cri.go:89] found id: ""
	I0401 21:01:51.109549   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.109559   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:01:51.109567   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:01:51.109631   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:01:51.146356   61496 cri.go:89] found id: ""
	I0401 21:01:51.146384   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.146391   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:01:51.146399   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:01:51.146451   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:01:51.183669   61496 cri.go:89] found id: ""
	I0401 21:01:51.183706   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.183716   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:01:51.183722   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:01:51.183785   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:01:51.230350   61496 cri.go:89] found id: ""
	I0401 21:01:51.230380   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.230392   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:01:51.230399   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:01:51.230459   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:01:51.269418   61496 cri.go:89] found id: ""
	I0401 21:01:51.269449   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.269459   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:01:51.269465   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:01:51.269524   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:01:51.307431   61496 cri.go:89] found id: ""
	I0401 21:01:51.307461   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.307473   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:01:51.307480   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:01:51.307540   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:01:51.342664   61496 cri.go:89] found id: ""
	I0401 21:01:51.342697   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.342707   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:01:51.342715   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:01:51.342782   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:01:51.381850   61496 cri.go:89] found id: ""
	I0401 21:01:51.381884   61496 logs.go:282] 0 containers: []
	W0401 21:01:51.381895   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:01:51.381906   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:01:51.381920   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:01:51.442501   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:01:51.442540   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:01:51.457310   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:01:51.457342   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:01:51.529537   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:01:51.529563   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:01:51.529588   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:01:51.615348   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:01:51.615384   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:01:54.160224   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:54.176048   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:01:54.176125   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:01:54.211452   61496 cri.go:89] found id: ""
	I0401 21:01:54.211476   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.211486   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:01:54.211493   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:01:54.211560   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:01:54.251802   61496 cri.go:89] found id: ""
	I0401 21:01:54.251831   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.251842   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:01:54.251849   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:01:54.251912   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:01:54.292085   61496 cri.go:89] found id: ""
	I0401 21:01:54.292116   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.292126   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:01:54.292132   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:01:54.292191   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:01:54.329615   61496 cri.go:89] found id: ""
	I0401 21:01:54.329636   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.329644   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:01:54.329649   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:01:54.329719   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:01:54.367554   61496 cri.go:89] found id: ""
	I0401 21:01:54.367584   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.367594   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:01:54.367601   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:01:54.367727   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:01:54.424606   61496 cri.go:89] found id: ""
	I0401 21:01:54.424629   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.424636   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:01:54.424642   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:01:54.424698   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:01:54.463369   61496 cri.go:89] found id: ""
	I0401 21:01:54.463391   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.463403   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:01:54.463408   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:01:54.463461   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:01:54.502414   61496 cri.go:89] found id: ""
	I0401 21:01:54.502446   61496 logs.go:282] 0 containers: []
	W0401 21:01:54.502457   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:01:54.502469   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:01:54.502484   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:01:54.556333   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:01:54.556368   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:01:54.573974   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:01:54.574004   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:01:54.657541   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:01:54.657566   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:01:54.657580   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:01:54.737652   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:01:54.737685   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:01:57.279182   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:01:57.293191   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:01:57.293254   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:01:57.332039   61496 cri.go:89] found id: ""
	I0401 21:01:57.332065   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.332074   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:01:57.332079   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:01:57.332144   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:01:57.368690   61496 cri.go:89] found id: ""
	I0401 21:01:57.368713   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.368720   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:01:57.368725   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:01:57.368781   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:01:57.406638   61496 cri.go:89] found id: ""
	I0401 21:01:57.406664   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.406672   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:01:57.406678   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:01:57.406730   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:01:57.444694   61496 cri.go:89] found id: ""
	I0401 21:01:57.444720   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.444732   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:01:57.444740   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:01:57.444798   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:01:57.481135   61496 cri.go:89] found id: ""
	I0401 21:01:57.481166   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.481176   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:01:57.481184   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:01:57.481242   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:01:57.521369   61496 cri.go:89] found id: ""
	I0401 21:01:57.521399   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.521410   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:01:57.521417   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:01:57.521478   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:01:57.560518   61496 cri.go:89] found id: ""
	I0401 21:01:57.560540   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.560546   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:01:57.560551   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:01:57.560605   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:01:57.607552   61496 cri.go:89] found id: ""
	I0401 21:01:57.607576   61496 logs.go:282] 0 containers: []
	W0401 21:01:57.607583   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:01:57.607591   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:01:57.607603   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:01:57.687035   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:01:57.687054   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:01:57.687065   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:01:57.776739   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:01:57.776832   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:01:57.821740   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:01:57.821775   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:01:57.877581   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:01:57.877616   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:00.394345   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:00.410601   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:00.410670   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:00.448568   61496 cri.go:89] found id: ""
	I0401 21:02:00.448607   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.448619   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:00.448629   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:00.448693   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:00.489082   61496 cri.go:89] found id: ""
	I0401 21:02:00.489112   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.489123   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:00.489130   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:00.489183   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:00.528589   61496 cri.go:89] found id: ""
	I0401 21:02:00.528621   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.528633   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:00.528640   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:00.528700   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:00.566784   61496 cri.go:89] found id: ""
	I0401 21:02:00.566815   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.566825   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:00.566833   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:00.566892   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:00.605676   61496 cri.go:89] found id: ""
	I0401 21:02:00.605707   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.605718   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:00.605725   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:00.605787   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:00.642944   61496 cri.go:89] found id: ""
	I0401 21:02:00.642972   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.642984   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:00.642993   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:00.643057   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:00.680643   61496 cri.go:89] found id: ""
	I0401 21:02:00.680671   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.680681   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:00.680689   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:00.680751   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:00.724638   61496 cri.go:89] found id: ""
	I0401 21:02:00.724668   61496 logs.go:282] 0 containers: []
	W0401 21:02:00.724680   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:00.724692   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:00.724708   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:00.739842   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:00.739883   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:00.841017   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:00.841045   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:00.841059   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:00.927327   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:00.927361   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:00.971744   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:00.971773   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:03.525190   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:03.539338   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:03.539408   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:03.577302   61496 cri.go:89] found id: ""
	I0401 21:02:03.577329   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.577339   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:03.577346   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:03.577406   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:03.614510   61496 cri.go:89] found id: ""
	I0401 21:02:03.614538   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.614550   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:03.614556   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:03.614617   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:03.651713   61496 cri.go:89] found id: ""
	I0401 21:02:03.651737   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.651744   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:03.651749   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:03.651800   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:03.689080   61496 cri.go:89] found id: ""
	I0401 21:02:03.689103   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.689110   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:03.689115   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:03.689156   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:03.725285   61496 cri.go:89] found id: ""
	I0401 21:02:03.725308   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.725315   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:03.725319   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:03.725370   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:03.763110   61496 cri.go:89] found id: ""
	I0401 21:02:03.763132   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.763140   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:03.763145   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:03.763191   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:03.799721   61496 cri.go:89] found id: ""
	I0401 21:02:03.799745   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.799756   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:03.799763   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:03.799824   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:03.835710   61496 cri.go:89] found id: ""
	I0401 21:02:03.835734   61496 logs.go:282] 0 containers: []
	W0401 21:02:03.835741   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:03.835755   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:03.835768   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:03.890953   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:03.890984   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:03.904829   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:03.904858   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:03.982235   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:03.982280   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:03.982296   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:04.072355   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:04.072398   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:06.614359   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:06.630975   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:06.631036   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:06.671045   61496 cri.go:89] found id: ""
	I0401 21:02:06.671081   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.671095   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:06.671103   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:06.671174   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:06.705390   61496 cri.go:89] found id: ""
	I0401 21:02:06.705411   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.705418   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:06.705423   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:06.705470   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:06.746189   61496 cri.go:89] found id: ""
	I0401 21:02:06.746227   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.746239   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:06.746246   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:06.746312   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:06.789223   61496 cri.go:89] found id: ""
	I0401 21:02:06.789245   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.789253   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:06.789261   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:06.789316   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:06.826813   61496 cri.go:89] found id: ""
	I0401 21:02:06.826843   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.826854   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:06.826861   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:06.826920   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:06.863296   61496 cri.go:89] found id: ""
	I0401 21:02:06.863321   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.863337   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:06.863345   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:06.863404   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:06.897867   61496 cri.go:89] found id: ""
	I0401 21:02:06.897896   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.897905   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:06.897911   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:06.897958   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:06.937236   61496 cri.go:89] found id: ""
	I0401 21:02:06.937269   61496 logs.go:282] 0 containers: []
	W0401 21:02:06.937281   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:06.937291   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:06.937305   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:06.951836   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:06.951866   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:07.024084   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:07.024101   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:07.024116   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:07.099871   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:07.099903   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:07.138822   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:07.138849   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:09.695000   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:09.708676   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:09.708754   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:09.747892   61496 cri.go:89] found id: ""
	I0401 21:02:09.747920   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.747930   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:09.747937   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:09.748000   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:09.784142   61496 cri.go:89] found id: ""
	I0401 21:02:09.784174   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.784184   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:09.784192   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:09.784252   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:09.821854   61496 cri.go:89] found id: ""
	I0401 21:02:09.821877   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.821886   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:09.821893   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:09.821949   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:09.863502   61496 cri.go:89] found id: ""
	I0401 21:02:09.863531   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.863549   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:09.863556   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:09.863617   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:09.902176   61496 cri.go:89] found id: ""
	I0401 21:02:09.902207   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.902243   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:09.902251   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:09.902312   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:09.938561   61496 cri.go:89] found id: ""
	I0401 21:02:09.938593   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.938601   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:09.938607   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:09.938652   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:09.973540   61496 cri.go:89] found id: ""
	I0401 21:02:09.973567   61496 logs.go:282] 0 containers: []
	W0401 21:02:09.973575   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:09.973587   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:09.973637   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:10.009927   61496 cri.go:89] found id: ""
	I0401 21:02:10.009948   61496 logs.go:282] 0 containers: []
	W0401 21:02:10.009975   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:10.009984   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:10.009999   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:10.050415   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:10.050452   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:10.105327   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:10.105362   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:10.119856   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:10.119885   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:10.193905   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:10.193929   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:10.193957   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:12.778424   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:12.792640   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:12.792722   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:12.830538   61496 cri.go:89] found id: ""
	I0401 21:02:12.830576   61496 logs.go:282] 0 containers: []
	W0401 21:02:12.830589   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:12.830597   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:12.830653   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:12.867750   61496 cri.go:89] found id: ""
	I0401 21:02:12.867774   61496 logs.go:282] 0 containers: []
	W0401 21:02:12.867784   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:12.867792   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:12.867850   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:12.901186   61496 cri.go:89] found id: ""
	I0401 21:02:12.901213   61496 logs.go:282] 0 containers: []
	W0401 21:02:12.901224   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:12.901232   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:12.901298   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:12.947259   61496 cri.go:89] found id: ""
	I0401 21:02:12.947286   61496 logs.go:282] 0 containers: []
	W0401 21:02:12.947295   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:12.947302   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:12.947359   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:12.989001   61496 cri.go:89] found id: ""
	I0401 21:02:12.989022   61496 logs.go:282] 0 containers: []
	W0401 21:02:12.989030   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:12.989035   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:12.989079   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:13.029591   61496 cri.go:89] found id: ""
	I0401 21:02:13.029621   61496 logs.go:282] 0 containers: []
	W0401 21:02:13.029632   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:13.029639   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:13.029698   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:13.064339   61496 cri.go:89] found id: ""
	I0401 21:02:13.064366   61496 logs.go:282] 0 containers: []
	W0401 21:02:13.064376   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:13.064383   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:13.064449   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:13.097483   61496 cri.go:89] found id: ""
	I0401 21:02:13.097516   61496 logs.go:282] 0 containers: []
	W0401 21:02:13.097528   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:13.097550   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:13.097566   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:13.139357   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:13.139390   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:13.190642   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:13.190678   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:13.204801   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:13.204827   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:13.276321   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:13.276349   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:13.276364   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:15.860642   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:15.875289   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:15.875417   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:15.914853   61496 cri.go:89] found id: ""
	I0401 21:02:15.914883   61496 logs.go:282] 0 containers: []
	W0401 21:02:15.914894   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:15.914901   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:15.914959   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:15.952437   61496 cri.go:89] found id: ""
	I0401 21:02:15.952469   61496 logs.go:282] 0 containers: []
	W0401 21:02:15.952480   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:15.952488   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:15.952545   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:15.992237   61496 cri.go:89] found id: ""
	I0401 21:02:15.992278   61496 logs.go:282] 0 containers: []
	W0401 21:02:15.992292   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:15.992304   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:15.992362   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:16.029122   61496 cri.go:89] found id: ""
	I0401 21:02:16.029144   61496 logs.go:282] 0 containers: []
	W0401 21:02:16.029152   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:16.029163   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:16.029205   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:16.064300   61496 cri.go:89] found id: ""
	I0401 21:02:16.064326   61496 logs.go:282] 0 containers: []
	W0401 21:02:16.064334   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:16.064339   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:16.064397   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:16.104471   61496 cri.go:89] found id: ""
	I0401 21:02:16.104507   61496 logs.go:282] 0 containers: []
	W0401 21:02:16.104526   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:16.104532   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:16.104591   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:16.139523   61496 cri.go:89] found id: ""
	I0401 21:02:16.139555   61496 logs.go:282] 0 containers: []
	W0401 21:02:16.139566   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:16.139579   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:16.139661   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:16.177488   61496 cri.go:89] found id: ""
	I0401 21:02:16.177519   61496 logs.go:282] 0 containers: []
	W0401 21:02:16.177530   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:16.177540   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:16.177552   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:16.231816   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:16.231850   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:16.246891   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:16.246920   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:16.319027   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:16.319054   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:16.319069   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:16.401444   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:16.401480   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:18.943555   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:18.958167   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:18.958245   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:18.993519   61496 cri.go:89] found id: ""
	I0401 21:02:18.993550   61496 logs.go:282] 0 containers: []
	W0401 21:02:18.993562   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:18.993569   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:18.993629   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:19.029371   61496 cri.go:89] found id: ""
	I0401 21:02:19.029396   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.029405   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:19.029411   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:19.029456   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:19.065755   61496 cri.go:89] found id: ""
	I0401 21:02:19.065787   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.065798   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:19.065810   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:19.065871   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:19.099897   61496 cri.go:89] found id: ""
	I0401 21:02:19.099922   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.099929   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:19.099934   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:19.099978   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:19.136199   61496 cri.go:89] found id: ""
	I0401 21:02:19.136237   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.136251   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:19.136259   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:19.136323   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:19.173993   61496 cri.go:89] found id: ""
	I0401 21:02:19.174019   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.174028   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:19.174033   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:19.174080   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:19.209058   61496 cri.go:89] found id: ""
	I0401 21:02:19.209081   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.209089   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:19.209095   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:19.209139   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:19.244308   61496 cri.go:89] found id: ""
	I0401 21:02:19.244347   61496 logs.go:282] 0 containers: []
	W0401 21:02:19.244365   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:19.244382   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:19.244396   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:19.313499   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:19.313523   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:19.313536   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:19.398033   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:19.398064   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:19.446902   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:19.446936   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:19.497480   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:19.497513   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:22.013621   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:22.026935   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:22.027010   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:22.066737   61496 cri.go:89] found id: ""
	I0401 21:02:22.066762   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.066774   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:22.066781   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:22.066842   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:22.100981   61496 cri.go:89] found id: ""
	I0401 21:02:22.101011   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.101021   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:22.101027   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:22.101086   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:22.138659   61496 cri.go:89] found id: ""
	I0401 21:02:22.138682   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.138689   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:22.138694   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:22.138751   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:22.175131   61496 cri.go:89] found id: ""
	I0401 21:02:22.175163   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.175175   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:22.175182   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:22.175241   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:22.210503   61496 cri.go:89] found id: ""
	I0401 21:02:22.210534   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.210545   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:22.210553   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:22.210608   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:22.246082   61496 cri.go:89] found id: ""
	I0401 21:02:22.246116   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.246130   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:22.246139   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:22.246200   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:22.281367   61496 cri.go:89] found id: ""
	I0401 21:02:22.281395   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.281408   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:22.281415   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:22.281478   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:22.317052   61496 cri.go:89] found id: ""
	I0401 21:02:22.317082   61496 logs.go:282] 0 containers: []
	W0401 21:02:22.317094   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:22.317106   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:22.317129   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:22.390990   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:22.391009   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:22.391020   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:22.477240   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:22.477278   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:22.517282   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:22.517306   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:22.568054   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:22.568090   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:25.082626   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:25.096175   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:25.096238   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:25.133808   61496 cri.go:89] found id: ""
	I0401 21:02:25.133844   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.133857   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:25.133866   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:25.133926   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:25.173595   61496 cri.go:89] found id: ""
	I0401 21:02:25.173627   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.173639   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:25.173646   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:25.173707   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:25.212222   61496 cri.go:89] found id: ""
	I0401 21:02:25.212251   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.212264   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:25.212271   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:25.212340   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:25.248765   61496 cri.go:89] found id: ""
	I0401 21:02:25.248810   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.248824   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:25.248841   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:25.248909   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:25.289306   61496 cri.go:89] found id: ""
	I0401 21:02:25.289332   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.289343   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:25.289351   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:25.289413   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:25.325085   61496 cri.go:89] found id: ""
	I0401 21:02:25.325110   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.325118   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:25.325123   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:25.325169   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:25.360489   61496 cri.go:89] found id: ""
	I0401 21:02:25.360511   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.360520   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:25.360526   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:25.360599   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:25.397033   61496 cri.go:89] found id: ""
	I0401 21:02:25.397057   61496 logs.go:282] 0 containers: []
	W0401 21:02:25.397065   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:25.397073   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:25.397087   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:25.436381   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:25.436404   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:25.488825   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:25.488862   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:25.504052   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:25.504076   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:25.579228   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:25.579254   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:25.579269   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:28.161333   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:28.174927   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:28.174988   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:28.211351   61496 cri.go:89] found id: ""
	I0401 21:02:28.211378   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.211389   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:28.211396   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:28.211462   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:28.251838   61496 cri.go:89] found id: ""
	I0401 21:02:28.251861   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.251868   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:28.251873   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:28.251920   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:28.285531   61496 cri.go:89] found id: ""
	I0401 21:02:28.285562   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.285573   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:28.285580   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:28.285638   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:28.322164   61496 cri.go:89] found id: ""
	I0401 21:02:28.322194   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.322205   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:28.322230   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:28.322294   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:28.356457   61496 cri.go:89] found id: ""
	I0401 21:02:28.356481   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.356488   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:28.356493   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:28.356554   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:28.392065   61496 cri.go:89] found id: ""
	I0401 21:02:28.392096   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.392107   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:28.392115   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:28.392175   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:28.427668   61496 cri.go:89] found id: ""
	I0401 21:02:28.427690   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.427697   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:28.427702   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:28.427744   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:28.466961   61496 cri.go:89] found id: ""
	I0401 21:02:28.466990   61496 logs.go:282] 0 containers: []
	W0401 21:02:28.467000   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:28.467012   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:28.467025   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:28.505160   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:28.505191   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:28.558754   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:28.558793   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:28.573155   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:28.573181   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:28.639871   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:28.639892   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:28.639906   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:31.226306   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:31.240119   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:31.240188   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:31.278784   61496 cri.go:89] found id: ""
	I0401 21:02:31.278809   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.278820   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:31.278827   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:31.278885   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:31.314453   61496 cri.go:89] found id: ""
	I0401 21:02:31.314477   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.314494   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:31.314499   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:31.314547   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:31.352486   61496 cri.go:89] found id: ""
	I0401 21:02:31.352520   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.352531   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:31.352538   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:31.352611   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:31.389444   61496 cri.go:89] found id: ""
	I0401 21:02:31.389470   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.389478   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:31.389484   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:31.389540   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:31.428477   61496 cri.go:89] found id: ""
	I0401 21:02:31.428508   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.428519   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:31.428526   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:31.428588   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:31.467616   61496 cri.go:89] found id: ""
	I0401 21:02:31.467652   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.467663   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:31.467671   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:31.467730   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:31.502390   61496 cri.go:89] found id: ""
	I0401 21:02:31.502420   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.502431   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:31.502438   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:31.502496   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:31.537229   61496 cri.go:89] found id: ""
	I0401 21:02:31.537257   61496 logs.go:282] 0 containers: []
	W0401 21:02:31.537268   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:31.537278   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:31.537291   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:31.589020   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:31.589059   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:31.603263   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:31.603289   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:31.676258   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:31.676277   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:31.676290   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:31.755073   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:31.755115   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:34.296808   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:34.310754   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:34.310816   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:34.349511   61496 cri.go:89] found id: ""
	I0401 21:02:34.349541   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.349552   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:34.349560   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:34.349615   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:34.397395   61496 cri.go:89] found id: ""
	I0401 21:02:34.397425   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.397436   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:34.397443   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:34.397511   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:34.443251   61496 cri.go:89] found id: ""
	I0401 21:02:34.443280   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.443291   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:34.443301   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:34.443395   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:34.482895   61496 cri.go:89] found id: ""
	I0401 21:02:34.482920   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.482931   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:34.482939   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:34.483000   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:34.522189   61496 cri.go:89] found id: ""
	I0401 21:02:34.522233   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.522244   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:34.522252   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:34.522310   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:34.562787   61496 cri.go:89] found id: ""
	I0401 21:02:34.562815   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.562826   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:34.562833   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:34.562890   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:34.597690   61496 cri.go:89] found id: ""
	I0401 21:02:34.597727   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.597742   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:34.597752   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:34.597817   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:34.653178   61496 cri.go:89] found id: ""
	I0401 21:02:34.653206   61496 logs.go:282] 0 containers: []
	W0401 21:02:34.653215   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:34.653226   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:34.653242   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:34.706800   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:34.706833   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:34.722175   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:34.722203   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:34.812093   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:34.812117   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:34.812131   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:34.889527   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:34.889566   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:37.432087   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:37.445200   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:37.445260   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:37.484599   61496 cri.go:89] found id: ""
	I0401 21:02:37.484626   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.484637   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:37.484644   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:37.484703   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:37.525357   61496 cri.go:89] found id: ""
	I0401 21:02:37.525385   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.525393   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:37.525397   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:37.525445   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:37.561776   61496 cri.go:89] found id: ""
	I0401 21:02:37.561799   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.561810   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:37.561816   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:37.561862   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:37.596706   61496 cri.go:89] found id: ""
	I0401 21:02:37.596739   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.596749   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:37.596755   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:37.596821   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:37.634902   61496 cri.go:89] found id: ""
	I0401 21:02:37.634928   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.634938   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:37.634945   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:37.635003   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:37.671662   61496 cri.go:89] found id: ""
	I0401 21:02:37.671688   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.671696   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:37.671702   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:37.671761   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:37.720214   61496 cri.go:89] found id: ""
	I0401 21:02:37.720246   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.720258   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:37.720265   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:37.720401   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:37.764215   61496 cri.go:89] found id: ""
	I0401 21:02:37.764240   61496 logs.go:282] 0 containers: []
	W0401 21:02:37.764250   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:37.764261   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:37.764277   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:37.818118   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:37.818152   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:37.834663   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:37.834696   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:37.937071   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:37.937096   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:37.937111   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:38.029371   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:38.029409   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:40.572501   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:40.586653   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:40.586730   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:40.627928   61496 cri.go:89] found id: ""
	I0401 21:02:40.628008   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.628025   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:40.628032   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:40.628085   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:40.672266   61496 cri.go:89] found id: ""
	I0401 21:02:40.672294   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.672303   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:40.672311   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:40.672366   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:40.708905   61496 cri.go:89] found id: ""
	I0401 21:02:40.708934   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.708946   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:40.708954   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:40.709013   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:40.746804   61496 cri.go:89] found id: ""
	I0401 21:02:40.746830   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.746839   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:40.746847   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:40.746902   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:40.791960   61496 cri.go:89] found id: ""
	I0401 21:02:40.791982   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.791996   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:40.792003   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:40.792058   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:40.830697   61496 cri.go:89] found id: ""
	I0401 21:02:40.830725   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.830745   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:40.830754   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:40.830820   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:40.868187   61496 cri.go:89] found id: ""
	I0401 21:02:40.868215   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.868225   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:40.868232   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:40.868279   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:40.906138   61496 cri.go:89] found id: ""
	I0401 21:02:40.906166   61496 logs.go:282] 0 containers: []
	W0401 21:02:40.906177   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:40.906188   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:40.906207   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:40.955091   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:40.955129   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:40.970276   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:40.970330   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:41.061356   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:41.061381   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:41.061394   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:41.145255   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:41.145294   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:43.688335   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:43.703591   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:43.703657   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:43.740085   61496 cri.go:89] found id: ""
	I0401 21:02:43.740108   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.740118   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:43.740124   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:43.740181   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:43.777906   61496 cri.go:89] found id: ""
	I0401 21:02:43.777929   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.777938   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:43.777944   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:43.778002   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:43.825471   61496 cri.go:89] found id: ""
	I0401 21:02:43.825505   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.825516   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:43.825523   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:43.825591   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:43.865214   61496 cri.go:89] found id: ""
	I0401 21:02:43.865240   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.865250   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:43.865263   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:43.865318   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:43.901032   61496 cri.go:89] found id: ""
	I0401 21:02:43.901064   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.901077   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:43.901086   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:43.901151   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:43.937616   61496 cri.go:89] found id: ""
	I0401 21:02:43.937642   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.937653   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:43.937660   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:43.937723   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:43.975917   61496 cri.go:89] found id: ""
	I0401 21:02:43.975944   61496 logs.go:282] 0 containers: []
	W0401 21:02:43.975954   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:43.975961   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:43.976021   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:44.017350   61496 cri.go:89] found id: ""
	I0401 21:02:44.017375   61496 logs.go:282] 0 containers: []
	W0401 21:02:44.017385   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:44.017396   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:44.017411   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:44.059817   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:44.059856   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:44.116636   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:44.116665   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:44.130804   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:44.130829   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:44.207246   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:44.207266   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:44.207280   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:46.791911   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:46.806735   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:46.806801   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:46.866427   61496 cri.go:89] found id: ""
	I0401 21:02:46.866454   61496 logs.go:282] 0 containers: []
	W0401 21:02:46.866464   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:46.866472   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:46.866541   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:46.910049   61496 cri.go:89] found id: ""
	I0401 21:02:46.910075   61496 logs.go:282] 0 containers: []
	W0401 21:02:46.910086   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:46.910092   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:46.910149   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:46.953100   61496 cri.go:89] found id: ""
	I0401 21:02:46.953128   61496 logs.go:282] 0 containers: []
	W0401 21:02:46.953139   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:46.953145   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:46.953204   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:46.989004   61496 cri.go:89] found id: ""
	I0401 21:02:46.989033   61496 logs.go:282] 0 containers: []
	W0401 21:02:46.989043   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:46.989050   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:46.989108   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:47.025252   61496 cri.go:89] found id: ""
	I0401 21:02:47.025281   61496 logs.go:282] 0 containers: []
	W0401 21:02:47.025292   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:47.025300   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:47.025364   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:47.063735   61496 cri.go:89] found id: ""
	I0401 21:02:47.063762   61496 logs.go:282] 0 containers: []
	W0401 21:02:47.063772   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:47.063789   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:47.063860   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:47.109417   61496 cri.go:89] found id: ""
	I0401 21:02:47.109458   61496 logs.go:282] 0 containers: []
	W0401 21:02:47.109469   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:47.109476   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:47.109551   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:47.146027   61496 cri.go:89] found id: ""
	I0401 21:02:47.146056   61496 logs.go:282] 0 containers: []
	W0401 21:02:47.146067   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:47.146077   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:47.146092   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:47.211362   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:47.211397   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:47.228324   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:47.228351   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:47.306036   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:47.306055   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:47.306069   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:47.382371   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:47.382404   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:49.926348   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:49.945384   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:49.945449   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:49.988841   61496 cri.go:89] found id: ""
	I0401 21:02:49.988862   61496 logs.go:282] 0 containers: []
	W0401 21:02:49.988870   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:49.988876   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:49.988944   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:50.030594   61496 cri.go:89] found id: ""
	I0401 21:02:50.030626   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.030637   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:50.030650   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:50.030714   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:50.069072   61496 cri.go:89] found id: ""
	I0401 21:02:50.069096   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.069113   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:50.069120   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:50.069186   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:50.116905   61496 cri.go:89] found id: ""
	I0401 21:02:50.116927   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.116935   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:50.116940   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:50.116998   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:50.161294   61496 cri.go:89] found id: ""
	I0401 21:02:50.161325   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.161335   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:50.161343   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:50.161407   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:50.204886   61496 cri.go:89] found id: ""
	I0401 21:02:50.204913   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.204924   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:50.204933   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:50.204995   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:50.242541   61496 cri.go:89] found id: ""
	I0401 21:02:50.242570   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.242579   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:50.242586   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:50.242648   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:50.281683   61496 cri.go:89] found id: ""
	I0401 21:02:50.281715   61496 logs.go:282] 0 containers: []
	W0401 21:02:50.281727   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:50.281738   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:50.281752   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:50.364656   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:50.364684   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:50.364701   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:50.458941   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:50.458983   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:50.501978   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:50.502010   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:50.576396   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:50.576429   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:53.093649   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:53.108269   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:53.108346   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:53.148897   61496 cri.go:89] found id: ""
	I0401 21:02:53.148925   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.148936   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:53.148943   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:53.149006   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:53.186292   61496 cri.go:89] found id: ""
	I0401 21:02:53.186314   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.186322   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:53.186326   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:53.186376   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:53.225772   61496 cri.go:89] found id: ""
	I0401 21:02:53.225798   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.225819   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:53.225827   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:53.225885   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:53.271070   61496 cri.go:89] found id: ""
	I0401 21:02:53.271096   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.271107   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:53.271114   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:53.271171   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:53.311234   61496 cri.go:89] found id: ""
	I0401 21:02:53.311257   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.311265   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:53.311271   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:53.311315   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:53.351602   61496 cri.go:89] found id: ""
	I0401 21:02:53.351629   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.351639   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:53.351645   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:53.351692   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:53.396308   61496 cri.go:89] found id: ""
	I0401 21:02:53.396337   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.396347   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:53.396356   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:53.396423   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:53.436049   61496 cri.go:89] found id: ""
	I0401 21:02:53.436076   61496 logs.go:282] 0 containers: []
	W0401 21:02:53.436089   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:53.436102   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:53.436112   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:53.491126   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:53.491162   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:53.505877   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:53.505908   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:53.576271   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:53.576289   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:53.576307   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:53.664319   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:53.664356   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:56.209131   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:56.223778   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:56.223862   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:56.260281   61496 cri.go:89] found id: ""
	I0401 21:02:56.260308   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.260319   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:56.260327   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:56.260415   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:56.304427   61496 cri.go:89] found id: ""
	I0401 21:02:56.304459   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.304470   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:56.304476   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:56.304545   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:56.348392   61496 cri.go:89] found id: ""
	I0401 21:02:56.348416   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.348425   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:56.348431   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:56.348486   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:56.394884   61496 cri.go:89] found id: ""
	I0401 21:02:56.394911   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.394922   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:56.394929   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:56.394987   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:56.432938   61496 cri.go:89] found id: ""
	I0401 21:02:56.432962   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.432969   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:56.432975   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:56.433029   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:56.479186   61496 cri.go:89] found id: ""
	I0401 21:02:56.479212   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.479222   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:56.479230   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:56.479293   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:56.513608   61496 cri.go:89] found id: ""
	I0401 21:02:56.513634   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.513645   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:56.513655   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:56.513715   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:56.554485   61496 cri.go:89] found id: ""
	I0401 21:02:56.554516   61496 logs.go:282] 0 containers: []
	W0401 21:02:56.554526   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:56.554536   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:56.554550   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:56.624677   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:56.624702   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:56.624716   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:56.708566   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:56.708614   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:02:56.752327   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:56.752366   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:56.807678   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:56.807764   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:59.332882   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:02:59.351035   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:02:59.351099   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:02:59.406279   61496 cri.go:89] found id: ""
	I0401 21:02:59.406314   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.406325   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:02:59.406333   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:02:59.406392   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:02:59.455221   61496 cri.go:89] found id: ""
	I0401 21:02:59.455249   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.455259   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:02:59.455266   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:02:59.455326   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:02:59.502145   61496 cri.go:89] found id: ""
	I0401 21:02:59.502172   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.502179   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:02:59.502185   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:02:59.502265   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:02:59.545494   61496 cri.go:89] found id: ""
	I0401 21:02:59.545529   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.545541   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:02:59.545550   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:02:59.545678   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:02:59.589350   61496 cri.go:89] found id: ""
	I0401 21:02:59.589373   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.589382   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:02:59.589389   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:02:59.589455   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:02:59.635146   61496 cri.go:89] found id: ""
	I0401 21:02:59.635176   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.635187   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:02:59.635195   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:02:59.635258   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:02:59.676255   61496 cri.go:89] found id: ""
	I0401 21:02:59.676280   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.676290   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:02:59.676299   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:02:59.676361   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:02:59.727931   61496 cri.go:89] found id: ""
	I0401 21:02:59.727961   61496 logs.go:282] 0 containers: []
	W0401 21:02:59.727972   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:02:59.727990   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:02:59.728006   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:02:59.786430   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:02:59.786467   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:02:59.801964   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:02:59.801994   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:02:59.896131   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:02:59.896162   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:02:59.896179   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:02:59.982103   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:02:59.982136   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:02.530342   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:02.544518   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:02.544593   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:02.591443   61496 cri.go:89] found id: ""
	I0401 21:03:02.591503   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.591518   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:02.591529   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:02.591596   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:02.629775   61496 cri.go:89] found id: ""
	I0401 21:03:02.629809   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.629816   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:02.629824   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:02.629875   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:02.674465   61496 cri.go:89] found id: ""
	I0401 21:03:02.674498   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.674510   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:02.674518   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:02.674578   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:02.714725   61496 cri.go:89] found id: ""
	I0401 21:03:02.714754   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.714764   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:02.714771   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:02.714832   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:02.756727   61496 cri.go:89] found id: ""
	I0401 21:03:02.756753   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.756764   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:02.756771   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:02.756834   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:02.795219   61496 cri.go:89] found id: ""
	I0401 21:03:02.795245   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.795256   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:02.795269   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:02.795314   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:02.832427   61496 cri.go:89] found id: ""
	I0401 21:03:02.832456   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.832465   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:02.832475   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:02.832536   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:02.872371   61496 cri.go:89] found id: ""
	I0401 21:03:02.872397   61496 logs.go:282] 0 containers: []
	W0401 21:03:02.872407   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:02.872416   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:02.872429   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:02.928503   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:02.928537   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:02.944301   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:02.944337   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:03.025912   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:03.025939   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:03.025954   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:03.125128   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:03.125183   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:05.666370   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:05.679745   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:05.679812   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:05.724474   61496 cri.go:89] found id: ""
	I0401 21:03:05.724501   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.724511   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:05.724517   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:05.724592   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:05.758404   61496 cri.go:89] found id: ""
	I0401 21:03:05.758437   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.758449   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:05.758455   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:05.758515   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:05.793530   61496 cri.go:89] found id: ""
	I0401 21:03:05.793560   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.793571   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:05.793579   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:05.793641   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:05.834990   61496 cri.go:89] found id: ""
	I0401 21:03:05.835019   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.835027   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:05.835033   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:05.835092   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:05.867508   61496 cri.go:89] found id: ""
	I0401 21:03:05.867538   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.867549   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:05.867555   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:05.867618   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:05.901050   61496 cri.go:89] found id: ""
	I0401 21:03:05.901074   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.901084   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:05.901090   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:05.901168   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:05.936810   61496 cri.go:89] found id: ""
	I0401 21:03:05.936846   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.936857   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:05.936863   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:05.936919   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:05.973198   61496 cri.go:89] found id: ""
	I0401 21:03:05.973228   61496 logs.go:282] 0 containers: []
	W0401 21:03:05.973239   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:05.973248   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:05.973259   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:06.026382   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:06.026416   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:06.040299   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:06.040328   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:06.111931   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:06.111958   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:06.111974   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:06.193132   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:06.193167   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:08.733430   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:08.747562   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:08.747619   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:08.787658   61496 cri.go:89] found id: ""
	I0401 21:03:08.787683   61496 logs.go:282] 0 containers: []
	W0401 21:03:08.787691   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:08.787696   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:08.787739   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:08.823498   61496 cri.go:89] found id: ""
	I0401 21:03:08.823524   61496 logs.go:282] 0 containers: []
	W0401 21:03:08.823532   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:08.823537   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:08.823598   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:08.860117   61496 cri.go:89] found id: ""
	I0401 21:03:08.860140   61496 logs.go:282] 0 containers: []
	W0401 21:03:08.860149   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:08.860157   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:08.860213   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:08.896702   61496 cri.go:89] found id: ""
	I0401 21:03:08.896739   61496 logs.go:282] 0 containers: []
	W0401 21:03:08.896747   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:08.896752   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:08.896810   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:08.938752   61496 cri.go:89] found id: ""
	I0401 21:03:08.938776   61496 logs.go:282] 0 containers: []
	W0401 21:03:08.938784   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:08.938790   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:08.938854   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:08.973692   61496 cri.go:89] found id: ""
	I0401 21:03:08.973715   61496 logs.go:282] 0 containers: []
	W0401 21:03:08.973725   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:08.973732   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:08.973794   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:09.009002   61496 cri.go:89] found id: ""
	I0401 21:03:09.009032   61496 logs.go:282] 0 containers: []
	W0401 21:03:09.009043   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:09.009050   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:09.009107   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:09.044574   61496 cri.go:89] found id: ""
	I0401 21:03:09.044602   61496 logs.go:282] 0 containers: []
	W0401 21:03:09.044610   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:09.044618   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:09.044629   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:09.099148   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:09.099177   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:09.113100   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:09.113140   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:09.191217   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:09.191241   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:09.191256   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:09.267122   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:09.267157   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:11.810323   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:11.825022   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:11.825111   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:11.863799   61496 cri.go:89] found id: ""
	I0401 21:03:11.863832   61496 logs.go:282] 0 containers: []
	W0401 21:03:11.863843   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:11.863853   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:11.863915   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:11.905293   61496 cri.go:89] found id: ""
	I0401 21:03:11.905318   61496 logs.go:282] 0 containers: []
	W0401 21:03:11.905335   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:11.905341   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:11.905393   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:11.960649   61496 cri.go:89] found id: ""
	I0401 21:03:11.960677   61496 logs.go:282] 0 containers: []
	W0401 21:03:11.960690   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:11.960698   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:11.960759   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:12.001750   61496 cri.go:89] found id: ""
	I0401 21:03:12.001779   61496 logs.go:282] 0 containers: []
	W0401 21:03:12.001796   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:12.001804   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:12.001857   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:12.042601   61496 cri.go:89] found id: ""
	I0401 21:03:12.042631   61496 logs.go:282] 0 containers: []
	W0401 21:03:12.042641   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:12.042649   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:12.042708   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:12.081659   61496 cri.go:89] found id: ""
	I0401 21:03:12.081683   61496 logs.go:282] 0 containers: []
	W0401 21:03:12.081691   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:12.081696   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:12.081748   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:12.120173   61496 cri.go:89] found id: ""
	I0401 21:03:12.120219   61496 logs.go:282] 0 containers: []
	W0401 21:03:12.120229   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:12.120236   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:12.120293   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:12.169284   61496 cri.go:89] found id: ""
	I0401 21:03:12.169310   61496 logs.go:282] 0 containers: []
	W0401 21:03:12.169320   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:12.169330   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:12.169342   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:12.225024   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:12.225054   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:12.240153   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:12.240179   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:12.314340   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:12.314362   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:12.314379   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:12.394824   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:12.394856   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:14.941739   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:14.958161   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:14.958249   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:15.003484   61496 cri.go:89] found id: ""
	I0401 21:03:15.003517   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.003527   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:15.003534   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:15.003587   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:15.052945   61496 cri.go:89] found id: ""
	I0401 21:03:15.053039   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.053050   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:15.053058   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:15.053127   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:15.103302   61496 cri.go:89] found id: ""
	I0401 21:03:15.103330   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.103341   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:15.103349   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:15.103408   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:15.148470   61496 cri.go:89] found id: ""
	I0401 21:03:15.148495   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.148504   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:15.148512   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:15.148568   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:15.198242   61496 cri.go:89] found id: ""
	I0401 21:03:15.198265   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.198275   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:15.198282   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:15.198339   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:15.253792   61496 cri.go:89] found id: ""
	I0401 21:03:15.253820   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.253830   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:15.253838   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:15.253896   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:15.310312   61496 cri.go:89] found id: ""
	I0401 21:03:15.310340   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.310351   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:15.310358   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:15.310416   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:15.356998   61496 cri.go:89] found id: ""
	I0401 21:03:15.357024   61496 logs.go:282] 0 containers: []
	W0401 21:03:15.357034   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:15.357045   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:15.357064   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:15.428490   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:15.428524   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:15.507433   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:15.507478   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:15.530645   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:15.530682   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:15.652064   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:15.652090   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:15.652108   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:18.232847   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:18.247715   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:18.247790   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:18.301976   61496 cri.go:89] found id: ""
	I0401 21:03:18.302003   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.302013   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:18.302021   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:18.302068   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:18.346745   61496 cri.go:89] found id: ""
	I0401 21:03:18.346771   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.346788   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:18.346796   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:18.346870   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:18.387922   61496 cri.go:89] found id: ""
	I0401 21:03:18.387951   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.387962   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:18.387970   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:18.388029   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:18.433831   61496 cri.go:89] found id: ""
	I0401 21:03:18.433859   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.433870   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:18.433876   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:18.433933   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:18.482802   61496 cri.go:89] found id: ""
	I0401 21:03:18.482835   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.482845   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:18.482853   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:18.482910   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:18.528156   61496 cri.go:89] found id: ""
	I0401 21:03:18.528183   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.528193   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:18.528200   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:18.528258   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:18.568028   61496 cri.go:89] found id: ""
	I0401 21:03:18.568065   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.568076   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:18.568084   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:18.568144   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:18.608567   61496 cri.go:89] found id: ""
	I0401 21:03:18.608596   61496 logs.go:282] 0 containers: []
	W0401 21:03:18.608606   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:18.608616   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:18.608630   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:18.665678   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:18.665718   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:18.681589   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:18.681618   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:18.761952   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:18.761978   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:18.761994   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:18.857181   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:18.857223   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:21.407652   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:21.422163   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:21.422243   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:21.464989   61496 cri.go:89] found id: ""
	I0401 21:03:21.465015   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.465025   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:21.465031   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:21.465088   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:21.503176   61496 cri.go:89] found id: ""
	I0401 21:03:21.503203   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.503213   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:21.503221   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:21.503280   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:21.540905   61496 cri.go:89] found id: ""
	I0401 21:03:21.540932   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.540941   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:21.540949   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:21.541010   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:21.575608   61496 cri.go:89] found id: ""
	I0401 21:03:21.575633   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.575643   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:21.575650   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:21.575708   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:21.614640   61496 cri.go:89] found id: ""
	I0401 21:03:21.614663   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.614673   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:21.614681   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:21.614747   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:21.659208   61496 cri.go:89] found id: ""
	I0401 21:03:21.659230   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.659237   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:21.659242   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:21.659285   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:21.695469   61496 cri.go:89] found id: ""
	I0401 21:03:21.695500   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.695508   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:21.695512   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:21.695571   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:21.737093   61496 cri.go:89] found id: ""
	I0401 21:03:21.737116   61496 logs.go:282] 0 containers: []
	W0401 21:03:21.737127   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:21.737136   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:21.737149   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:21.753053   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:21.753081   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:21.829700   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:21.829730   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:21.829746   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:21.914517   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:21.914549   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:21.967992   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:21.968087   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:24.556008   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:24.587694   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:24.587747   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:24.635924   61496 cri.go:89] found id: ""
	I0401 21:03:24.635956   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.635968   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:24.635975   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:24.636035   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:24.681681   61496 cri.go:89] found id: ""
	I0401 21:03:24.681708   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.681715   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:24.681721   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:24.681778   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:24.723880   61496 cri.go:89] found id: ""
	I0401 21:03:24.723915   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.723926   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:24.723934   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:24.724000   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:24.776238   61496 cri.go:89] found id: ""
	I0401 21:03:24.776264   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.776280   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:24.776287   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:24.776347   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:24.825317   61496 cri.go:89] found id: ""
	I0401 21:03:24.825343   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.825353   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:24.825360   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:24.825416   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:24.873738   61496 cri.go:89] found id: ""
	I0401 21:03:24.873764   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.873783   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:24.873791   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:24.873846   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:24.922969   61496 cri.go:89] found id: ""
	I0401 21:03:24.922996   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.923006   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:24.923013   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:24.923070   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:24.978515   61496 cri.go:89] found id: ""
	I0401 21:03:24.978558   61496 logs.go:282] 0 containers: []
	W0401 21:03:24.978568   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:24.978578   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:24.978593   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:25.034817   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:25.034845   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:25.048679   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:25.048712   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:25.135744   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:25.135763   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:25.135775   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:25.248413   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:25.248476   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:27.809011   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:27.833881   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:27.833939   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:27.918438   61496 cri.go:89] found id: ""
	I0401 21:03:27.918463   61496 logs.go:282] 0 containers: []
	W0401 21:03:27.918474   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:27.918481   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:27.918537   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:27.968436   61496 cri.go:89] found id: ""
	I0401 21:03:27.968464   61496 logs.go:282] 0 containers: []
	W0401 21:03:27.968476   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:27.968483   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:27.968547   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:28.015665   61496 cri.go:89] found id: ""
	I0401 21:03:28.015689   61496 logs.go:282] 0 containers: []
	W0401 21:03:28.015700   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:28.015707   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:28.015760   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:28.063495   61496 cri.go:89] found id: ""
	I0401 21:03:28.063531   61496 logs.go:282] 0 containers: []
	W0401 21:03:28.063542   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:28.063550   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:28.063598   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:28.114785   61496 cri.go:89] found id: ""
	I0401 21:03:28.114810   61496 logs.go:282] 0 containers: []
	W0401 21:03:28.114821   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:28.114835   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:28.114899   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:28.158846   61496 cri.go:89] found id: ""
	I0401 21:03:28.158871   61496 logs.go:282] 0 containers: []
	W0401 21:03:28.158881   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:28.158888   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:28.158965   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:28.200210   61496 cri.go:89] found id: ""
	I0401 21:03:28.200238   61496 logs.go:282] 0 containers: []
	W0401 21:03:28.200246   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:28.200253   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:28.200331   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:28.238330   61496 cri.go:89] found id: ""
	I0401 21:03:28.238360   61496 logs.go:282] 0 containers: []
	W0401 21:03:28.238371   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:28.238382   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:28.238397   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:28.310603   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:28.310631   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:28.326282   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:28.326317   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:28.424586   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:28.424618   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:28.424633   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:28.543192   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:28.543240   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:31.096390   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:31.110542   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:31.110617   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:31.149782   61496 cri.go:89] found id: ""
	I0401 21:03:31.149815   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.149827   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:31.149834   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:31.149885   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:31.185322   61496 cri.go:89] found id: ""
	I0401 21:03:31.185349   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.185360   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:31.185367   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:31.185416   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:31.227106   61496 cri.go:89] found id: ""
	I0401 21:03:31.227133   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.227141   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:31.227146   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:31.227192   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:31.261717   61496 cri.go:89] found id: ""
	I0401 21:03:31.261739   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.261745   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:31.261750   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:31.261797   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:31.297594   61496 cri.go:89] found id: ""
	I0401 21:03:31.297627   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.297638   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:31.297648   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:31.297705   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:31.338407   61496 cri.go:89] found id: ""
	I0401 21:03:31.338438   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.338449   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:31.338460   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:31.338528   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:31.379193   61496 cri.go:89] found id: ""
	I0401 21:03:31.379223   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.379233   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:31.379240   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:31.379297   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:31.417573   61496 cri.go:89] found id: ""
	I0401 21:03:31.417594   61496 logs.go:282] 0 containers: []
	W0401 21:03:31.417601   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:31.417608   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:31.417619   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:31.478746   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:31.478784   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:31.493844   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:31.493880   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:31.580427   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:31.580454   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:31.580468   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:31.689445   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:31.689479   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:34.240132   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:34.254405   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:34.254472   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:34.292427   61496 cri.go:89] found id: ""
	I0401 21:03:34.292455   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.292466   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:34.292474   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:34.292532   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:34.333910   61496 cri.go:89] found id: ""
	I0401 21:03:34.333930   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.333937   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:34.333942   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:34.333990   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:34.376795   61496 cri.go:89] found id: ""
	I0401 21:03:34.376821   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.376832   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:34.376838   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:34.376893   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:34.416718   61496 cri.go:89] found id: ""
	I0401 21:03:34.416747   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.416759   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:34.416768   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:34.416827   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:34.453207   61496 cri.go:89] found id: ""
	I0401 21:03:34.453235   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.453256   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:34.453264   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:34.453321   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:34.490394   61496 cri.go:89] found id: ""
	I0401 21:03:34.490429   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.490439   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:34.490447   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:34.490502   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:34.527361   61496 cri.go:89] found id: ""
	I0401 21:03:34.527391   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.527401   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:34.527408   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:34.527470   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:34.567288   61496 cri.go:89] found id: ""
	I0401 21:03:34.567311   61496 logs.go:282] 0 containers: []
	W0401 21:03:34.567318   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:34.567325   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:34.567335   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:34.629550   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:34.629586   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:34.644888   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:34.644927   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:34.731863   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:34.731886   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:34.731901   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:34.827718   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:34.827754   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:37.373159   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:37.388944   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:37.389022   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:37.430773   61496 cri.go:89] found id: ""
	I0401 21:03:37.430800   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.430810   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:37.430818   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:37.430872   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:37.472714   61496 cri.go:89] found id: ""
	I0401 21:03:37.472748   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.472765   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:37.472773   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:37.472830   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:37.521695   61496 cri.go:89] found id: ""
	I0401 21:03:37.521719   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.521728   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:37.521733   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:37.521793   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:37.558804   61496 cri.go:89] found id: ""
	I0401 21:03:37.558840   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.558847   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:37.558853   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:37.558910   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:37.594439   61496 cri.go:89] found id: ""
	I0401 21:03:37.594468   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.594479   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:37.594487   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:37.594545   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:37.630792   61496 cri.go:89] found id: ""
	I0401 21:03:37.630818   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.630828   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:37.630835   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:37.630897   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:37.666922   61496 cri.go:89] found id: ""
	I0401 21:03:37.666949   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.666960   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:37.666968   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:37.667027   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:37.702327   61496 cri.go:89] found id: ""
	I0401 21:03:37.702361   61496 logs.go:282] 0 containers: []
	W0401 21:03:37.702375   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:37.702386   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:37.702401   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:37.769450   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:37.769485   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:37.784804   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:37.784840   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:37.877696   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:37.877716   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:37.877730   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:37.964783   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:37.964810   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:40.512550   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:40.527190   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:40.527248   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:40.573017   61496 cri.go:89] found id: ""
	I0401 21:03:40.573041   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.573053   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:40.573060   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:40.573114   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:40.614844   61496 cri.go:89] found id: ""
	I0401 21:03:40.614880   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.614890   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:40.614897   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:40.614956   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:40.652646   61496 cri.go:89] found id: ""
	I0401 21:03:40.652677   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.652687   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:40.652694   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:40.652764   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:40.697361   61496 cri.go:89] found id: ""
	I0401 21:03:40.697390   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.697401   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:40.697408   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:40.697468   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:40.737269   61496 cri.go:89] found id: ""
	I0401 21:03:40.737295   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.737306   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:40.737313   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:40.737379   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:40.784613   61496 cri.go:89] found id: ""
	I0401 21:03:40.784643   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.784653   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:40.784660   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:40.784722   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:40.828923   61496 cri.go:89] found id: ""
	I0401 21:03:40.828953   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.828961   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:40.828966   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:40.829020   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:40.870677   61496 cri.go:89] found id: ""
	I0401 21:03:40.870708   61496 logs.go:282] 0 containers: []
	W0401 21:03:40.870718   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:40.870727   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:40.870741   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:40.889729   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:40.889758   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:40.980901   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:40.980925   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:40.980941   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:41.100463   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:41.100511   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:41.141848   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:41.141882   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:43.717375   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:43.738546   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:43.738643   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:43.809014   61496 cri.go:89] found id: ""
	I0401 21:03:43.809043   61496 logs.go:282] 0 containers: []
	W0401 21:03:43.809054   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:43.809061   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:43.809117   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:43.849923   61496 cri.go:89] found id: ""
	I0401 21:03:43.849952   61496 logs.go:282] 0 containers: []
	W0401 21:03:43.849963   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:43.849970   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:43.850030   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:43.895728   61496 cri.go:89] found id: ""
	I0401 21:03:43.895758   61496 logs.go:282] 0 containers: []
	W0401 21:03:43.895769   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:43.895776   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:43.895844   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:43.937391   61496 cri.go:89] found id: ""
	I0401 21:03:43.937418   61496 logs.go:282] 0 containers: []
	W0401 21:03:43.937428   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:43.937435   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:43.937495   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:43.989083   61496 cri.go:89] found id: ""
	I0401 21:03:43.989110   61496 logs.go:282] 0 containers: []
	W0401 21:03:43.989134   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:43.989142   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:43.989208   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:44.034771   61496 cri.go:89] found id: ""
	I0401 21:03:44.034799   61496 logs.go:282] 0 containers: []
	W0401 21:03:44.034808   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:44.034814   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:44.034872   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:44.080223   61496 cri.go:89] found id: ""
	I0401 21:03:44.080249   61496 logs.go:282] 0 containers: []
	W0401 21:03:44.080257   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:44.080264   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:44.080327   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:44.118332   61496 cri.go:89] found id: ""
	I0401 21:03:44.118359   61496 logs.go:282] 0 containers: []
	W0401 21:03:44.118371   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:44.118382   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:44.118395   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:44.168261   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:44.168294   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:44.230525   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:44.230566   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:44.245949   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:44.245985   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:44.346351   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:44.346375   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:44.346391   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:46.962363   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:46.982834   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:46.982908   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:47.026789   61496 cri.go:89] found id: ""
	I0401 21:03:47.026817   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.026827   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:47.026835   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:47.026899   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:47.068766   61496 cri.go:89] found id: ""
	I0401 21:03:47.068807   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.068819   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:47.068827   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:47.068889   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:47.115973   61496 cri.go:89] found id: ""
	I0401 21:03:47.116001   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.116012   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:47.116018   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:47.116075   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:47.164420   61496 cri.go:89] found id: ""
	I0401 21:03:47.164442   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.164459   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:47.164467   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:47.164529   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:47.211258   61496 cri.go:89] found id: ""
	I0401 21:03:47.211292   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.211302   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:47.211309   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:47.211398   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:47.260565   61496 cri.go:89] found id: ""
	I0401 21:03:47.260615   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.260625   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:47.260631   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:47.260686   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:47.299841   61496 cri.go:89] found id: ""
	I0401 21:03:47.299863   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.299872   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:47.299876   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:47.299929   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:47.342237   61496 cri.go:89] found id: ""
	I0401 21:03:47.342266   61496 logs.go:282] 0 containers: []
	W0401 21:03:47.342277   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:47.342287   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:47.342300   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:47.401482   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:47.401523   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:47.420914   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:47.420945   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:47.509453   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:47.509481   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:47.509496   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:47.605825   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:47.605864   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:50.160044   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:50.175286   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:50.175360   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:50.216747   61496 cri.go:89] found id: ""
	I0401 21:03:50.216781   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.216794   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:50.216802   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:50.216862   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:50.256292   61496 cri.go:89] found id: ""
	I0401 21:03:50.256326   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.256336   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:50.256343   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:50.256400   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:50.300038   61496 cri.go:89] found id: ""
	I0401 21:03:50.300065   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.300074   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:50.300081   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:50.300139   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:50.342875   61496 cri.go:89] found id: ""
	I0401 21:03:50.342900   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.342912   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:50.342919   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:50.342975   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:50.382901   61496 cri.go:89] found id: ""
	I0401 21:03:50.382936   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.382947   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:50.382980   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:50.383066   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:50.420578   61496 cri.go:89] found id: ""
	I0401 21:03:50.420606   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.420617   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:50.420625   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:50.420685   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:50.462795   61496 cri.go:89] found id: ""
	I0401 21:03:50.462828   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.462837   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:50.462843   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:50.462902   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:50.513170   61496 cri.go:89] found id: ""
	I0401 21:03:50.513200   61496 logs.go:282] 0 containers: []
	W0401 21:03:50.513212   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:50.513223   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:50.513236   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:50.601470   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:50.601507   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:50.656427   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:50.656456   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:50.756415   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:50.756452   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:50.774036   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:50.774065   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:50.853827   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:53.354542   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:53.374354   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:53.374418   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:53.480558   61496 cri.go:89] found id: ""
	I0401 21:03:53.480583   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.480593   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:53.480601   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:53.480653   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:53.563791   61496 cri.go:89] found id: ""
	I0401 21:03:53.563816   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.563828   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:53.563835   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:53.563890   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:53.614219   61496 cri.go:89] found id: ""
	I0401 21:03:53.614244   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.614254   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:53.614261   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:53.614316   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:53.669435   61496 cri.go:89] found id: ""
	I0401 21:03:53.669459   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.669469   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:53.669476   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:53.669540   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:53.724343   61496 cri.go:89] found id: ""
	I0401 21:03:53.724368   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.724378   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:53.724385   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:53.724443   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:53.780151   61496 cri.go:89] found id: ""
	I0401 21:03:53.780176   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.780187   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:53.780202   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:53.780256   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:53.837541   61496 cri.go:89] found id: ""
	I0401 21:03:53.837565   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.837576   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:53.837583   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:53.837643   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:53.896770   61496 cri.go:89] found id: ""
	I0401 21:03:53.896797   61496 logs.go:282] 0 containers: []
	W0401 21:03:53.896813   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:53.896822   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:53.896836   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:03:54.040208   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:54.040319   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:54.137824   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:54.137874   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:54.222656   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:54.222690   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:54.240376   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:54.240412   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:54.354460   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:56.854672   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:03:56.876558   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:03:56.876637   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:03:56.930192   61496 cri.go:89] found id: ""
	I0401 21:03:56.930247   61496 logs.go:282] 0 containers: []
	W0401 21:03:56.930260   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:03:56.930270   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:03:56.930330   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:03:56.979887   61496 cri.go:89] found id: ""
	I0401 21:03:56.979910   61496 logs.go:282] 0 containers: []
	W0401 21:03:56.979919   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:03:56.979925   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:03:56.979981   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:03:57.030068   61496 cri.go:89] found id: ""
	I0401 21:03:57.030099   61496 logs.go:282] 0 containers: []
	W0401 21:03:57.030109   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:03:57.030116   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:03:57.030169   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:03:57.071885   61496 cri.go:89] found id: ""
	I0401 21:03:57.071916   61496 logs.go:282] 0 containers: []
	W0401 21:03:57.071927   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:03:57.071935   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:03:57.072001   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:03:57.114684   61496 cri.go:89] found id: ""
	I0401 21:03:57.114728   61496 logs.go:282] 0 containers: []
	W0401 21:03:57.114748   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:03:57.114757   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:03:57.114823   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:03:57.160746   61496 cri.go:89] found id: ""
	I0401 21:03:57.160773   61496 logs.go:282] 0 containers: []
	W0401 21:03:57.160783   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:03:57.160791   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:03:57.160843   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:03:57.203924   61496 cri.go:89] found id: ""
	I0401 21:03:57.203950   61496 logs.go:282] 0 containers: []
	W0401 21:03:57.203961   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:03:57.203968   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:03:57.204019   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:03:57.249348   61496 cri.go:89] found id: ""
	I0401 21:03:57.249446   61496 logs.go:282] 0 containers: []
	W0401 21:03:57.249469   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:03:57.249485   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:03:57.249506   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:03:57.322850   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:03:57.322883   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:03:57.401658   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:03:57.401705   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:03:57.420701   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:03:57.420729   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:03:57.509900   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:03:57.509923   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:03:57.509938   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:00.123554   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:00.139721   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:00.139794   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:00.182767   61496 cri.go:89] found id: ""
	I0401 21:04:00.182805   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.182816   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:00.182828   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:00.182889   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:00.233433   61496 cri.go:89] found id: ""
	I0401 21:04:00.233460   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.233471   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:00.233478   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:00.233536   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:00.290182   61496 cri.go:89] found id: ""
	I0401 21:04:00.290232   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.290246   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:00.290254   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:00.290315   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:00.343558   61496 cri.go:89] found id: ""
	I0401 21:04:00.343586   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.343597   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:00.343604   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:00.343664   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:00.392808   61496 cri.go:89] found id: ""
	I0401 21:04:00.392835   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.392846   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:00.392853   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:00.392911   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:00.436410   61496 cri.go:89] found id: ""
	I0401 21:04:00.436442   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.436454   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:00.436462   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:00.436539   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:00.486200   61496 cri.go:89] found id: ""
	I0401 21:04:00.486250   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.486262   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:00.486269   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:00.486327   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:00.527920   61496 cri.go:89] found id: ""
	I0401 21:04:00.527948   61496 logs.go:282] 0 containers: []
	W0401 21:04:00.527959   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:00.527970   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:00.527985   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:00.582331   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:00.582368   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:00.601804   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:00.601845   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:00.693407   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:00.693429   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:00.693443   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:00.784882   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:00.784919   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:03.336406   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:03.360522   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:03.360601   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:03.430571   61496 cri.go:89] found id: ""
	I0401 21:04:03.430599   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.430609   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:03.430616   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:03.430675   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:03.485379   61496 cri.go:89] found id: ""
	I0401 21:04:03.485405   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.485415   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:03.485422   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:03.485475   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:03.545711   61496 cri.go:89] found id: ""
	I0401 21:04:03.545745   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.545763   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:03.545770   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:03.545827   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:03.604816   61496 cri.go:89] found id: ""
	I0401 21:04:03.604846   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.604857   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:03.604864   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:03.604921   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:03.654272   61496 cri.go:89] found id: ""
	I0401 21:04:03.654303   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.654314   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:03.654321   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:03.654385   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:03.703558   61496 cri.go:89] found id: ""
	I0401 21:04:03.703588   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.703599   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:03.703606   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:03.703668   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:03.756547   61496 cri.go:89] found id: ""
	I0401 21:04:03.756575   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.756587   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:03.756594   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:03.756654   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:03.808718   61496 cri.go:89] found id: ""
	I0401 21:04:03.808742   61496 logs.go:282] 0 containers: []
	W0401 21:04:03.808752   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:03.808778   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:03.808794   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:03.890122   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:03.890167   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:03.908782   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:03.908816   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:04.042003   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:04.042029   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:04.042046   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:04.159575   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:04.159605   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:06.714541   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:06.727709   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:06.727795   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:06.771164   61496 cri.go:89] found id: ""
	I0401 21:04:06.771191   61496 logs.go:282] 0 containers: []
	W0401 21:04:06.771202   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:06.771209   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:06.771274   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:06.818266   61496 cri.go:89] found id: ""
	I0401 21:04:06.818297   61496 logs.go:282] 0 containers: []
	W0401 21:04:06.818308   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:06.818316   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:06.818376   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:06.865186   61496 cri.go:89] found id: ""
	I0401 21:04:06.865221   61496 logs.go:282] 0 containers: []
	W0401 21:04:06.865233   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:06.865241   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:06.865306   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:06.909792   61496 cri.go:89] found id: ""
	I0401 21:04:06.909846   61496 logs.go:282] 0 containers: []
	W0401 21:04:06.909859   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:06.909866   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:06.909928   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:06.964613   61496 cri.go:89] found id: ""
	I0401 21:04:06.964646   61496 logs.go:282] 0 containers: []
	W0401 21:04:06.964657   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:06.964667   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:06.964728   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:07.011303   61496 cri.go:89] found id: ""
	I0401 21:04:07.011331   61496 logs.go:282] 0 containers: []
	W0401 21:04:07.011344   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:07.011352   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:07.011416   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:07.067155   61496 cri.go:89] found id: ""
	I0401 21:04:07.067188   61496 logs.go:282] 0 containers: []
	W0401 21:04:07.067198   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:07.067205   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:07.067262   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:07.119613   61496 cri.go:89] found id: ""
	I0401 21:04:07.119645   61496 logs.go:282] 0 containers: []
	W0401 21:04:07.119656   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:07.119668   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:07.119686   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:07.221775   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:07.221800   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:07.221815   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:07.345224   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:07.345279   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:07.401512   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:07.401548   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:07.459279   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:07.459317   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:09.978326   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:09.992627   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:09.992709   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:10.034423   61496 cri.go:89] found id: ""
	I0401 21:04:10.034451   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.034460   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:10.034468   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:10.034524   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:10.077288   61496 cri.go:89] found id: ""
	I0401 21:04:10.077318   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.077329   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:10.077341   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:10.077400   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:10.116700   61496 cri.go:89] found id: ""
	I0401 21:04:10.116735   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.116746   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:10.116753   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:10.116805   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:10.163858   61496 cri.go:89] found id: ""
	I0401 21:04:10.163883   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.163893   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:10.163901   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:10.163965   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:10.202881   61496 cri.go:89] found id: ""
	I0401 21:04:10.202924   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.202935   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:10.202942   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:10.203000   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:10.242032   61496 cri.go:89] found id: ""
	I0401 21:04:10.242064   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.242075   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:10.242083   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:10.242145   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:10.282433   61496 cri.go:89] found id: ""
	I0401 21:04:10.282460   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.282470   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:10.282477   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:10.282534   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:10.323694   61496 cri.go:89] found id: ""
	I0401 21:04:10.323718   61496 logs.go:282] 0 containers: []
	W0401 21:04:10.323728   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:10.323739   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:10.323753   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:10.339439   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:10.339470   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:10.417979   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:10.418000   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:10.418014   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:10.501125   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:10.501164   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:10.543725   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:10.543755   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:13.106186   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:13.120217   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:13.120270   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:13.152886   61496 cri.go:89] found id: ""
	I0401 21:04:13.152920   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.152932   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:13.152940   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:13.153005   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:13.188899   61496 cri.go:89] found id: ""
	I0401 21:04:13.188929   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.188939   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:13.188945   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:13.189000   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:13.237547   61496 cri.go:89] found id: ""
	I0401 21:04:13.237577   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.237589   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:13.237597   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:13.237657   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:13.275885   61496 cri.go:89] found id: ""
	I0401 21:04:13.275909   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.275916   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:13.275927   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:13.275985   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:13.311726   61496 cri.go:89] found id: ""
	I0401 21:04:13.311756   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.311767   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:13.311774   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:13.311836   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:13.351885   61496 cri.go:89] found id: ""
	I0401 21:04:13.351909   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.351916   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:13.351923   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:13.351981   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:13.398537   61496 cri.go:89] found id: ""
	I0401 21:04:13.398560   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.398568   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:13.398574   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:13.398626   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:13.431921   61496 cri.go:89] found id: ""
	I0401 21:04:13.431955   61496 logs.go:282] 0 containers: []
	W0401 21:04:13.431966   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:13.431976   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:13.431989   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:13.510513   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:13.510550   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:13.557542   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:13.557574   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:13.610102   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:13.610133   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:13.623970   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:13.624002   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:13.696218   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:16.197906   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:16.219787   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:16.219846   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:16.259090   61496 cri.go:89] found id: ""
	I0401 21:04:16.259117   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.259125   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:16.259133   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:16.259186   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:16.294691   61496 cri.go:89] found id: ""
	I0401 21:04:16.294718   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.294728   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:16.294735   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:16.294788   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:16.329576   61496 cri.go:89] found id: ""
	I0401 21:04:16.329610   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.329619   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:16.329627   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:16.329694   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:16.375316   61496 cri.go:89] found id: ""
	I0401 21:04:16.375349   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.375359   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:16.375378   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:16.375436   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:16.410606   61496 cri.go:89] found id: ""
	I0401 21:04:16.410632   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.410642   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:16.410649   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:16.410705   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:16.447042   61496 cri.go:89] found id: ""
	I0401 21:04:16.447069   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.447080   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:16.447087   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:16.447143   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:16.487372   61496 cri.go:89] found id: ""
	I0401 21:04:16.487396   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.487404   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:16.487409   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:16.487455   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:16.527962   61496 cri.go:89] found id: ""
	I0401 21:04:16.527993   61496 logs.go:282] 0 containers: []
	W0401 21:04:16.528005   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:16.528017   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:16.528031   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:16.574505   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:16.574536   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:16.627787   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:16.627819   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:16.645859   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:16.645893   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:16.724584   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:16.724606   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:16.724621   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:19.306988   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:19.320895   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:19.320964   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:19.356218   61496 cri.go:89] found id: ""
	I0401 21:04:19.356249   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.356260   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:19.356267   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:19.356323   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:19.392863   61496 cri.go:89] found id: ""
	I0401 21:04:19.392898   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.392910   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:19.392918   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:19.392984   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:19.432975   61496 cri.go:89] found id: ""
	I0401 21:04:19.433002   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.433012   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:19.433019   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:19.433072   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:19.476851   61496 cri.go:89] found id: ""
	I0401 21:04:19.476879   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.476891   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:19.476899   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:19.476962   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:19.510725   61496 cri.go:89] found id: ""
	I0401 21:04:19.510760   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.510771   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:19.510779   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:19.510837   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:19.546403   61496 cri.go:89] found id: ""
	I0401 21:04:19.546432   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.546442   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:19.546449   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:19.546509   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:19.582756   61496 cri.go:89] found id: ""
	I0401 21:04:19.582781   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.582791   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:19.582803   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:19.582873   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:19.620708   61496 cri.go:89] found id: ""
	I0401 21:04:19.620735   61496 logs.go:282] 0 containers: []
	W0401 21:04:19.620744   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:19.620754   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:19.620768   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:19.670964   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:19.671000   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:19.686201   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:19.686247   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:19.761962   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:19.761986   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:19.762001   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:19.853730   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:19.853766   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:22.404007   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:22.417213   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:22.417274   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:22.452541   61496 cri.go:89] found id: ""
	I0401 21:04:22.452574   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.452584   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:22.452594   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:22.452654   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:22.487006   61496 cri.go:89] found id: ""
	I0401 21:04:22.487039   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.487047   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:22.487052   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:22.487099   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:22.523590   61496 cri.go:89] found id: ""
	I0401 21:04:22.523623   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.523636   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:22.523644   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:22.523704   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:22.557752   61496 cri.go:89] found id: ""
	I0401 21:04:22.557786   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.557797   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:22.557804   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:22.557871   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:22.592172   61496 cri.go:89] found id: ""
	I0401 21:04:22.592200   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.592208   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:22.592214   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:22.592266   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:22.628121   61496 cri.go:89] found id: ""
	I0401 21:04:22.628154   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.628165   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:22.628172   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:22.628226   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:22.662876   61496 cri.go:89] found id: ""
	I0401 21:04:22.662907   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.662918   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:22.662925   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:22.662984   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:22.698036   61496 cri.go:89] found id: ""
	I0401 21:04:22.698062   61496 logs.go:282] 0 containers: []
	W0401 21:04:22.698072   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:22.698082   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:22.698096   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:22.712135   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:22.712167   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:22.778184   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:22.778204   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:22.778234   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:22.859025   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:22.859061   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:22.898344   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:22.898375   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:25.450328   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:25.467481   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:25.467555   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:25.517625   61496 cri.go:89] found id: ""
	I0401 21:04:25.517652   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.517662   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:25.517669   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:25.517815   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:25.552304   61496 cri.go:89] found id: ""
	I0401 21:04:25.552329   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.552340   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:25.552348   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:25.552403   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:25.587800   61496 cri.go:89] found id: ""
	I0401 21:04:25.587827   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.587834   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:25.587839   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:25.587895   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:25.623870   61496 cri.go:89] found id: ""
	I0401 21:04:25.623899   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.623910   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:25.623918   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:25.623976   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:25.663481   61496 cri.go:89] found id: ""
	I0401 21:04:25.663508   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.663519   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:25.663526   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:25.663580   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:25.709598   61496 cri.go:89] found id: ""
	I0401 21:04:25.709632   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.709645   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:25.709652   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:25.709712   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:25.760577   61496 cri.go:89] found id: ""
	I0401 21:04:25.760622   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.760632   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:25.760639   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:25.760702   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:25.803505   61496 cri.go:89] found id: ""
	I0401 21:04:25.803532   61496 logs.go:282] 0 containers: []
	W0401 21:04:25.803542   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:25.803552   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:25.803566   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:25.908544   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:25.908569   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:25.908587   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:25.999013   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:25.999051   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:26.054059   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:26.054090   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:26.112545   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:26.112581   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:28.632034   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:28.647741   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:28.647816   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:28.687585   61496 cri.go:89] found id: ""
	I0401 21:04:28.687616   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.687626   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:28.687634   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:28.687691   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:28.733683   61496 cri.go:89] found id: ""
	I0401 21:04:28.733712   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.733721   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:28.733728   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:28.733792   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:28.784470   61496 cri.go:89] found id: ""
	I0401 21:04:28.784506   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.784519   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:28.784526   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:28.784585   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:28.827854   61496 cri.go:89] found id: ""
	I0401 21:04:28.827886   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.827897   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:28.827905   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:28.827964   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:28.874258   61496 cri.go:89] found id: ""
	I0401 21:04:28.874284   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.874292   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:28.874297   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:28.874351   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:28.914602   61496 cri.go:89] found id: ""
	I0401 21:04:28.914628   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.914636   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:28.914644   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:28.914701   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:28.961056   61496 cri.go:89] found id: ""
	I0401 21:04:28.961103   61496 logs.go:282] 0 containers: []
	W0401 21:04:28.961115   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:28.961124   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:28.961183   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:29.008342   61496 cri.go:89] found id: ""
	I0401 21:04:29.008371   61496 logs.go:282] 0 containers: []
	W0401 21:04:29.008380   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:29.008390   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:29.008403   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:29.065032   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:29.065064   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:29.082945   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:29.082977   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:29.160265   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:29.160290   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:29.160306   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:29.243505   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:29.243544   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:31.799988   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:31.816940   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:31.817027   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:31.862579   61496 cri.go:89] found id: ""
	I0401 21:04:31.862603   61496 logs.go:282] 0 containers: []
	W0401 21:04:31.862614   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:31.862621   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:31.862672   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:31.898326   61496 cri.go:89] found id: ""
	I0401 21:04:31.898353   61496 logs.go:282] 0 containers: []
	W0401 21:04:31.898360   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:31.898366   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:31.898412   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:31.933878   61496 cri.go:89] found id: ""
	I0401 21:04:31.933903   61496 logs.go:282] 0 containers: []
	W0401 21:04:31.933914   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:31.933920   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:31.933978   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:31.970370   61496 cri.go:89] found id: ""
	I0401 21:04:31.970395   61496 logs.go:282] 0 containers: []
	W0401 21:04:31.970402   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:31.970408   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:31.970464   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:32.012325   61496 cri.go:89] found id: ""
	I0401 21:04:32.012356   61496 logs.go:282] 0 containers: []
	W0401 21:04:32.012368   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:32.012375   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:32.012436   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:32.048675   61496 cri.go:89] found id: ""
	I0401 21:04:32.048701   61496 logs.go:282] 0 containers: []
	W0401 21:04:32.048711   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:32.048717   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:32.048776   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:32.086611   61496 cri.go:89] found id: ""
	I0401 21:04:32.086640   61496 logs.go:282] 0 containers: []
	W0401 21:04:32.086651   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:32.086659   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:32.086715   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:32.125465   61496 cri.go:89] found id: ""
	I0401 21:04:32.125489   61496 logs.go:282] 0 containers: []
	W0401 21:04:32.125510   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:32.125520   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:32.125534   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:32.194379   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:32.194408   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:32.210794   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:32.210836   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:32.321189   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:32.321214   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:32.321228   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:32.417188   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:32.417220   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:34.964001   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:34.977605   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:34.977677   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:35.014196   61496 cri.go:89] found id: ""
	I0401 21:04:35.014244   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.014256   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:35.014265   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:35.014325   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:35.057480   61496 cri.go:89] found id: ""
	I0401 21:04:35.057508   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.057518   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:35.057526   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:35.057592   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:35.095559   61496 cri.go:89] found id: ""
	I0401 21:04:35.095601   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.095612   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:35.095619   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:35.095673   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:35.133959   61496 cri.go:89] found id: ""
	I0401 21:04:35.133987   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.134003   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:35.134011   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:35.134066   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:35.170746   61496 cri.go:89] found id: ""
	I0401 21:04:35.170819   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.170836   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:35.170843   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:35.170909   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:35.207665   61496 cri.go:89] found id: ""
	I0401 21:04:35.207693   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.207704   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:35.207710   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:35.207764   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:35.244377   61496 cri.go:89] found id: ""
	I0401 21:04:35.244403   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.244413   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:35.244420   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:35.244475   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:35.284940   61496 cri.go:89] found id: ""
	I0401 21:04:35.284969   61496 logs.go:282] 0 containers: []
	W0401 21:04:35.284981   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:35.284992   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:35.285005   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:35.344605   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:35.344641   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:35.362154   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:35.362188   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:35.434569   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:35.434594   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:35.434610   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:35.542143   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:35.542192   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:38.094348   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:38.117271   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:38.117334   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:38.174729   61496 cri.go:89] found id: ""
	I0401 21:04:38.174766   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.174777   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:38.174784   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:38.174840   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:38.236976   61496 cri.go:89] found id: ""
	I0401 21:04:38.236998   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.237008   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:38.237015   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:38.237069   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:38.303843   61496 cri.go:89] found id: ""
	I0401 21:04:38.303874   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.303884   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:38.303891   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:38.303957   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:38.351748   61496 cri.go:89] found id: ""
	I0401 21:04:38.351769   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.351775   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:38.351780   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:38.351821   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:38.397954   61496 cri.go:89] found id: ""
	I0401 21:04:38.397981   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.397993   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:38.398012   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:38.398068   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:38.438848   61496 cri.go:89] found id: ""
	I0401 21:04:38.438875   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.438885   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:38.438892   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:38.438949   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:38.482624   61496 cri.go:89] found id: ""
	I0401 21:04:38.482651   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.482662   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:38.482670   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:38.482735   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:38.521521   61496 cri.go:89] found id: ""
	I0401 21:04:38.521553   61496 logs.go:282] 0 containers: []
	W0401 21:04:38.521563   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:38.521573   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:38.521588   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:38.602875   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:38.602901   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:38.602917   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:38.688039   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:38.688081   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:38.733750   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:38.733781   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:38.797849   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:38.797883   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:41.317590   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:41.331195   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:41.331269   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:41.372230   61496 cri.go:89] found id: ""
	I0401 21:04:41.372261   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.372272   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:41.372279   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:41.372341   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:41.414410   61496 cri.go:89] found id: ""
	I0401 21:04:41.414436   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.414447   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:41.414454   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:41.414522   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:41.455627   61496 cri.go:89] found id: ""
	I0401 21:04:41.455654   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.455665   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:41.455672   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:41.455729   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:41.493559   61496 cri.go:89] found id: ""
	I0401 21:04:41.493605   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.493617   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:41.493624   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:41.493683   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:41.534248   61496 cri.go:89] found id: ""
	I0401 21:04:41.534276   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.534286   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:41.534293   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:41.534349   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:41.576595   61496 cri.go:89] found id: ""
	I0401 21:04:41.576622   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.576632   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:41.576639   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:41.576694   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:41.611589   61496 cri.go:89] found id: ""
	I0401 21:04:41.611612   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.611619   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:41.611624   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:41.611669   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:41.655859   61496 cri.go:89] found id: ""
	I0401 21:04:41.655882   61496 logs.go:282] 0 containers: []
	W0401 21:04:41.655889   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:41.655898   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:41.655911   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:41.731346   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:41.731379   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:41.772269   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:41.772301   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:41.822578   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:41.822612   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:41.840038   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:41.840060   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:41.908469   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:44.409258   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:44.423072   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:04:44.423142   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:04:44.464180   61496 cri.go:89] found id: ""
	I0401 21:04:44.464214   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.464225   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:04:44.464233   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:04:44.464288   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:04:44.499453   61496 cri.go:89] found id: ""
	I0401 21:04:44.499487   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.499499   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:04:44.499506   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:04:44.499569   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:04:44.538528   61496 cri.go:89] found id: ""
	I0401 21:04:44.538556   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.538567   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:04:44.538574   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:04:44.538631   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:04:44.576529   61496 cri.go:89] found id: ""
	I0401 21:04:44.576559   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.576569   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:04:44.576576   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:04:44.576642   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:04:44.611217   61496 cri.go:89] found id: ""
	I0401 21:04:44.611247   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.611256   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:04:44.611262   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:04:44.611320   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:04:44.646055   61496 cri.go:89] found id: ""
	I0401 21:04:44.646083   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.646093   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:04:44.646100   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:04:44.646165   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:04:44.685541   61496 cri.go:89] found id: ""
	I0401 21:04:44.685578   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.685586   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:04:44.685592   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:04:44.685635   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:04:44.726940   61496 cri.go:89] found id: ""
	I0401 21:04:44.726976   61496 logs.go:282] 0 containers: []
	W0401 21:04:44.726991   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:04:44.727003   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:04:44.727016   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:04:44.784528   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:04:44.784559   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:04:44.799188   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:04:44.799214   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:04:44.872929   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:04:44.872952   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:04:44.872968   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:04:44.949374   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:04:44.949403   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 21:04:47.498356   61496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:04:47.516256   61496 kubeadm.go:597] duration metric: took 4m4.125322542s to restartPrimaryControlPlane
	W0401 21:04:47.516328   61496 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 21:04:47.516358   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 21:04:49.198962   61496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.682583581s)
	I0401 21:04:49.199024   61496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:04:49.217174   61496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:04:49.233896   61496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:04:49.250258   61496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:04:49.250279   61496 kubeadm.go:157] found existing configuration files:
	
	I0401 21:04:49.250329   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:04:49.265450   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:04:49.265513   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:04:49.281383   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:04:49.296460   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:04:49.296535   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:04:49.311964   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:04:49.326082   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:04:49.326152   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:04:49.341281   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:04:49.352324   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:04:49.352396   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:04:49.364737   61496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:04:49.464296   61496 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 21:04:49.464677   61496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:04:49.685685   61496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:04:49.685831   61496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:04:49.685968   61496 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 21:04:49.960564   61496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:04:49.962195   61496 out.go:235]   - Generating certificates and keys ...
	I0401 21:04:49.962328   61496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:04:49.962784   61496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:04:49.968379   61496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 21:04:49.969441   61496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 21:04:49.971135   61496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 21:04:49.972217   61496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 21:04:49.977510   61496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 21:04:49.979542   61496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 21:04:49.980775   61496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 21:04:49.981684   61496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 21:04:49.982146   61496 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 21:04:49.982338   61496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:04:50.203416   61496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:04:50.501110   61496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:04:50.772618   61496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:04:51.068932   61496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:04:51.091486   61496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:04:51.092586   61496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:04:51.092774   61496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:04:51.294448   61496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:04:51.296331   61496 out.go:235]   - Booting up control plane ...
	I0401 21:04:51.296573   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:04:51.308803   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:04:51.309892   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:04:51.312863   61496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:04:51.316516   61496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 21:05:31.317445   61496 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 21:05:31.318432   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:05:31.318728   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:05:36.319658   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:05:36.319963   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:05:46.320150   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:05:46.320462   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:06:06.321137   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:06:06.321400   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:06:46.323249   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:06:46.323562   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:06:46.323589   61496 kubeadm.go:310] 
	I0401 21:06:46.323671   61496 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 21:06:46.323729   61496 kubeadm.go:310] 		timed out waiting for the condition
	I0401 21:06:46.323739   61496 kubeadm.go:310] 
	I0401 21:06:46.323785   61496 kubeadm.go:310] 	This error is likely caused by:
	I0401 21:06:46.323841   61496 kubeadm.go:310] 		- The kubelet is not running
	I0401 21:06:46.323953   61496 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 21:06:46.323962   61496 kubeadm.go:310] 
	I0401 21:06:46.324059   61496 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 21:06:46.324094   61496 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 21:06:46.324135   61496 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 21:06:46.324142   61496 kubeadm.go:310] 
	I0401 21:06:46.324260   61496 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 21:06:46.324368   61496 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 21:06:46.324379   61496 kubeadm.go:310] 
	I0401 21:06:46.324499   61496 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 21:06:46.324626   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 21:06:46.324736   61496 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 21:06:46.324839   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 21:06:46.324850   61496 kubeadm.go:310] 
	I0401 21:06:46.325595   61496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 21:06:46.325717   61496 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 21:06:46.325833   61496 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0401 21:06:46.325979   61496 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0401 21:06:46.326075   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 21:06:48.441208   61496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.115108955s)
	I0401 21:06:48.441274   61496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:06:48.456470   61496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:06:48.466779   61496 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:06:48.466802   61496 kubeadm.go:157] found existing configuration files:
	
	I0401 21:06:48.466851   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:06:48.477240   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:06:48.477300   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:06:48.487486   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:06:48.497231   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:06:48.497277   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:06:48.507239   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:06:48.517156   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:06:48.517203   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:06:48.528653   61496 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:06:48.538526   61496 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:06:48.538588   61496 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:06:48.548693   61496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:06:48.761507   61496 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 21:08:44.694071   61496 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 21:08:44.694235   61496 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 21:08:44.695734   61496 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 21:08:44.695829   61496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:44.695942   61496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:44.696082   61496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:44.696333   61496 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 21:08:44.696433   61496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:08:44.698422   61496 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:44.698535   61496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:44.698622   61496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:44.698707   61496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 21:08:44.698782   61496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 21:08:44.698848   61496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 21:08:44.698894   61496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 21:08:44.698952   61496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 21:08:44.699004   61496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 21:08:44.699067   61496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 21:08:44.699131   61496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 21:08:44.699164   61496 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 21:08:44.699213   61496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:44.699257   61496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:44.699302   61496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:44.699360   61496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:44.699410   61496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:44.699518   61496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:44.699595   61496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:44.699630   61496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:44.699705   61496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:44.701085   61496 out.go:235]   - Booting up control plane ...
	I0401 21:08:44.701182   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:44.701269   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:44.701370   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:44.701492   61496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:44.701663   61496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 21:08:44.701710   61496 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 21:08:44.701768   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.701969   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702033   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702244   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702341   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702570   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702639   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702818   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702922   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.703238   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.703248   61496 kubeadm.go:310] 
	I0401 21:08:44.703300   61496 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 21:08:44.703339   61496 kubeadm.go:310] 		timed out waiting for the condition
	I0401 21:08:44.703347   61496 kubeadm.go:310] 
	I0401 21:08:44.703393   61496 kubeadm.go:310] 	This error is likely caused by:
	I0401 21:08:44.703424   61496 kubeadm.go:310] 		- The kubelet is not running
	I0401 21:08:44.703575   61496 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 21:08:44.703594   61496 kubeadm.go:310] 
	I0401 21:08:44.703747   61496 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 21:08:44.703797   61496 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 21:08:44.703843   61496 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 21:08:44.703851   61496 kubeadm.go:310] 
	I0401 21:08:44.703979   61496 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 21:08:44.704106   61496 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 21:08:44.704117   61496 kubeadm.go:310] 
	I0401 21:08:44.704223   61496 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 21:08:44.704338   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 21:08:44.704400   61496 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 21:08:44.704458   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 21:08:44.704515   61496 kubeadm.go:394] duration metric: took 8m1.369559682s to StartCluster
	I0401 21:08:44.704550   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:08:44.704601   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:08:44.704607   61496 kubeadm.go:310] 
	I0401 21:08:44.776607   61496 cri.go:89] found id: ""
	I0401 21:08:44.776631   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.776638   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:08:44.776643   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:08:44.776688   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:08:44.822697   61496 cri.go:89] found id: ""
	I0401 21:08:44.822724   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.822732   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:08:44.822737   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:08:44.822789   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:08:44.870855   61496 cri.go:89] found id: ""
	I0401 21:08:44.870884   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.870895   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:08:44.870903   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:08:44.870963   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:08:44.909983   61496 cri.go:89] found id: ""
	I0401 21:08:44.910010   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.910019   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:08:44.910025   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:08:44.910205   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:08:44.947636   61496 cri.go:89] found id: ""
	I0401 21:08:44.947667   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.947677   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:08:44.947684   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:08:44.947742   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:08:44.987225   61496 cri.go:89] found id: ""
	I0401 21:08:44.987254   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.987265   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:08:44.987273   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:08:44.987328   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:08:45.031455   61496 cri.go:89] found id: ""
	I0401 21:08:45.031483   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.031493   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:08:45.031498   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:08:45.031556   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:08:45.073545   61496 cri.go:89] found id: ""
	I0401 21:08:45.073572   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.073582   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:08:45.073593   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:08:45.073604   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:08:45.139059   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:08:45.139110   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:08:45.156271   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:08:45.156309   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:08:45.239654   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:08:45.239682   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:08:45.239697   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:08:45.355473   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:08:45.355501   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0401 21:08:45.401208   61496 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 21:08:45.401255   61496 out.go:270] * 
	W0401 21:08:45.401306   61496 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.401323   61496 out.go:270] * 
	W0401 21:08:45.402124   61496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 21:08:45.405265   61496 out.go:201] 
	W0401 21:08:45.406413   61496 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.406448   61496 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 21:08:45.406470   61496 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 21:08:45.407866   61496 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
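The suggestion in the stderr above points at the kubelet cgroup driver; a minimal sketch of retrying the same failing start command with that extra config appended (every other flag taken verbatim from the args quoted above, nothing else assumed) would be:

	out/minikube-linux-amd64 start -p old-k8s-version-582207 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd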
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (228.550512ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-582207 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 pgrep                       | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | -a kubelet                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo docker                        | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo cat                           | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo                               | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo find                          | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-269490 sudo crio                          | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-269490                                    | kindnet-269490        | jenkins | v1.35.0 | 01 Apr 25 21:08 UTC | 01 Apr 25 21:08 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 21:07:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 21:07:20.892475   72096 out.go:345] Setting OutFile to fd 1 ...
	I0401 21:07:20.892577   72096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:07:20.892588   72096 out.go:358] Setting ErrFile to fd 2...
	I0401 21:07:20.892592   72096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:07:20.892779   72096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 21:07:20.893387   72096 out.go:352] Setting JSON to false
	I0401 21:07:20.894914   72096 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6585,"bootTime":1743535056,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 21:07:20.895074   72096 start.go:139] virtualization: kvm guest
	I0401 21:07:20.896928   72096 out.go:177] * [custom-flannel-269490] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 21:07:20.898151   72096 notify.go:220] Checking for updates...
	I0401 21:07:20.898184   72096 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 21:07:20.899289   72096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 21:07:20.900337   72096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:07:20.901554   72096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:20.902784   72096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 21:07:20.903866   72096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 21:07:20.905447   72096 config.go:182] Loaded profile config "calico-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:20.905560   72096 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:20.905643   72096 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 21:07:20.905706   72096 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 21:07:20.945212   72096 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 21:07:20.946413   72096 start.go:297] selected driver: kvm2
	I0401 21:07:20.946434   72096 start.go:901] validating driver "kvm2" against <nil>
	I0401 21:07:20.946446   72096 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 21:07:20.947178   72096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:07:20.947262   72096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 21:07:20.963919   72096 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 21:07:20.963985   72096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 21:07:20.964232   72096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:07:20.964268   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:07:20.964285   72096 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0401 21:07:20.964365   72096 start.go:340] cluster config:
	{Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:07:20.964523   72096 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:07:20.966047   72096 out.go:177] * Starting "custom-flannel-269490" primary control-plane node in "custom-flannel-269490" cluster
	I0401 21:07:18.476294   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:18.476788   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find current IP address of domain kindnet-269490 in network mk-kindnet-269490
	I0401 21:07:18.476808   70627 main.go:141] libmachine: (kindnet-269490) DBG | I0401 21:07:18.476765   70649 retry.go:31] will retry after 3.122657647s: waiting for domain to come up
	I0401 21:07:21.603058   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:21.603568   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find current IP address of domain kindnet-269490 in network mk-kindnet-269490
	I0401 21:07:21.603587   70627 main.go:141] libmachine: (kindnet-269490) DBG | I0401 21:07:21.603538   70649 retry.go:31] will retry after 5.429623003s: waiting for domain to come up
	I0401 21:07:19.747355   68904 addons.go:514] duration metric: took 1.377747901s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:07:19.754995   68904 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-269490" context rescaled to 1 replicas
	I0401 21:07:21.254062   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:23.254170   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:20.967052   72096 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:20.967100   72096 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 21:07:20.967109   72096 cache.go:56] Caching tarball of preloaded images
	I0401 21:07:20.967208   72096 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 21:07:20.967221   72096 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 21:07:20.967324   72096 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json ...
	I0401 21:07:20.967350   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json: {Name:mkabbd5fa26c3d0a0e3ad8206cce24911ddf4ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:20.967473   72096 start.go:360] acquireMachinesLock for custom-flannel-269490: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 21:07:27.036122   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.036704   70627 main.go:141] libmachine: (kindnet-269490) found domain IP: 192.168.72.200
	I0401 21:07:27.036728   70627 main.go:141] libmachine: (kindnet-269490) reserving static IP address...
	I0401 21:07:27.036741   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has current primary IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.037062   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find host DHCP lease matching {name: "kindnet-269490", mac: "52:54:00:a7:37:c0", ip: "192.168.72.200"} in network mk-kindnet-269490
	I0401 21:07:27.112813   70627 main.go:141] libmachine: (kindnet-269490) DBG | Getting to WaitForSSH function...
	I0401 21:07:27.112843   70627 main.go:141] libmachine: (kindnet-269490) reserved static IP address 192.168.72.200 for domain kindnet-269490
	I0401 21:07:27.112872   70627 main.go:141] libmachine: (kindnet-269490) waiting for SSH...
	I0401 21:07:27.115323   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.115796   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.115923   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.115950   70627 main.go:141] libmachine: (kindnet-269490) DBG | Using SSH client type: external
	I0401 21:07:27.115972   70627 main.go:141] libmachine: (kindnet-269490) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa (-rw-------)
	I0401 21:07:27.115994   70627 main.go:141] libmachine: (kindnet-269490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:07:27.116012   70627 main.go:141] libmachine: (kindnet-269490) DBG | About to run SSH command:
	I0401 21:07:27.116025   70627 main.go:141] libmachine: (kindnet-269490) DBG | exit 0
	I0401 21:07:28.675412   72096 start.go:364] duration metric: took 7.707851568s to acquireMachinesLock for "custom-flannel-269490"
	I0401 21:07:28.675471   72096 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:07:28.675590   72096 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 21:07:25.472985   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:27.253847   68904 node_ready.go:49] node "calico-269490" has status "Ready":"True"
	I0401 21:07:27.253864   68904 node_ready.go:38] duration metric: took 8.003199629s for node "calico-269490" to be "Ready" ...
	I0401 21:07:27.253872   68904 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:07:27.257050   68904 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:27.242376   70627 main.go:141] libmachine: (kindnet-269490) DBG | SSH cmd err, output: <nil>: 
	I0401 21:07:27.242647   70627 main.go:141] libmachine: (kindnet-269490) KVM machine creation complete
	I0401 21:07:27.242954   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetConfigRaw
	I0401 21:07:27.243418   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:27.243604   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:27.243762   70627 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 21:07:27.243775   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:27.245022   70627 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 21:07:27.245035   70627 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 21:07:27.245039   70627 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 21:07:27.245044   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.247141   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.247552   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.247576   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.247767   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.247943   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.248079   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.248204   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.248336   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.248568   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.248579   70627 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 21:07:27.345624   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:27.345651   70627 main.go:141] libmachine: Detecting the provisioner...
	I0401 21:07:27.345668   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.348762   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.349156   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.349177   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.349442   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.349668   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.349845   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.349977   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.350143   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.350384   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.350397   70627 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 21:07:27.455197   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 21:07:27.455275   70627 main.go:141] libmachine: found compatible host: buildroot
	I0401 21:07:27.455286   70627 main.go:141] libmachine: Provisioning with buildroot...
	I0401 21:07:27.455296   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.455573   70627 buildroot.go:166] provisioning hostname "kindnet-269490"
	I0401 21:07:27.455600   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.455807   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.458178   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.458482   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.458501   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.458727   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.458935   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.459090   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.459252   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.459383   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.459600   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.459612   70627 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-269490 && echo "kindnet-269490" | sudo tee /etc/hostname
	I0401 21:07:27.580784   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-269490
	
	I0401 21:07:27.580810   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.583963   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.584471   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.584501   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.584766   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.584991   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.585193   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.585384   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.585564   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.585756   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.585773   70627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-269490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-269490/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-269490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:07:27.700731   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:27.700756   70627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:07:27.700776   70627 buildroot.go:174] setting up certificates
	I0401 21:07:27.700789   70627 provision.go:84] configureAuth start
	I0401 21:07:27.700807   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.701088   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:27.703973   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.704286   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.704299   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.704491   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.706703   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.707051   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.707076   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.707203   70627 provision.go:143] copyHostCerts
	I0401 21:07:27.707255   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:07:27.707265   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:07:27.707328   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:07:27.707422   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:07:27.707429   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:07:27.707453   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:07:27.707515   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:07:27.707522   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:07:27.707542   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:07:27.707603   70627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.kindnet-269490 san=[127.0.0.1 192.168.72.200 kindnet-269490 localhost minikube]
	I0401 21:07:28.041214   70627 provision.go:177] copyRemoteCerts
	I0401 21:07:28.041272   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:07:28.041293   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.044440   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.044786   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.044818   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.044953   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.045179   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.045341   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.045494   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.125273   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:07:28.152183   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0401 21:07:28.177383   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 21:07:28.201496   70627 provision.go:87] duration metric: took 500.692247ms to configureAuth
	I0401 21:07:28.201523   70627 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:07:28.201720   70627 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:28.201828   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.204278   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.204623   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.204647   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.204776   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.204980   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.205160   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.205299   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.205448   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:28.205669   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:28.205689   70627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:07:28.439140   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:07:28.439165   70627 main.go:141] libmachine: Checking connection to Docker...
	I0401 21:07:28.439173   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetURL
	I0401 21:07:28.440485   70627 main.go:141] libmachine: (kindnet-269490) DBG | using libvirt version 6000000
	I0401 21:07:28.442490   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.442845   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.442873   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.443006   70627 main.go:141] libmachine: Docker is up and running!
	I0401 21:07:28.443020   70627 main.go:141] libmachine: Reticulating splines...
	I0401 21:07:28.443027   70627 client.go:171] duration metric: took 26.224912939s to LocalClient.Create
	I0401 21:07:28.443053   70627 start.go:167] duration metric: took 26.224971636s to libmachine.API.Create "kindnet-269490"
	I0401 21:07:28.443076   70627 start.go:293] postStartSetup for "kindnet-269490" (driver="kvm2")
	I0401 21:07:28.443090   70627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:07:28.443111   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.443340   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:07:28.443361   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.445496   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.445781   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.445819   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.445938   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.446110   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.446250   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.446380   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.527257   70627 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:07:28.531876   70627 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:07:28.531913   70627 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:07:28.531976   70627 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:07:28.532079   70627 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:07:28.532200   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:07:28.542758   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:28.567116   70627 start.go:296] duration metric: took 124.023387ms for postStartSetup
	I0401 21:07:28.567157   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetConfigRaw
	I0401 21:07:28.567744   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:28.570513   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.570890   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.570925   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.571188   70627 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/config.json ...
	I0401 21:07:28.571352   70627 start.go:128] duration metric: took 26.372666304s to createHost
	I0401 21:07:28.571372   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.573625   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.573965   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.573996   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.574106   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.574359   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.574499   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.574645   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.574805   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:28.574999   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:28.575009   70627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:07:28.675218   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541648.630648618
	
	I0401 21:07:28.675244   70627 fix.go:216] guest clock: 1743541648.630648618
	I0401 21:07:28.675251   70627 fix.go:229] Guest: 2025-04-01 21:07:28.630648618 +0000 UTC Remote: 2025-04-01 21:07:28.571362914 +0000 UTC m=+26.497421115 (delta=59.285704ms)
	I0401 21:07:28.675268   70627 fix.go:200] guest clock delta is within tolerance: 59.285704ms
	I0401 21:07:28.675273   70627 start.go:83] releasing machines lock for "kindnet-269490", held for 26.476652376s
	I0401 21:07:28.675294   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.675584   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:28.678529   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.678972   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.679003   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.679129   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679598   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679812   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679913   70627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:07:28.679970   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.680010   70627 ssh_runner.go:195] Run: cat /version.json
	I0401 21:07:28.680030   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.682720   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683101   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.683138   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683163   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683249   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.683417   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.683501   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.683531   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683603   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.683739   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.683788   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.683896   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.684046   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.684172   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.769030   70627 ssh_runner.go:195] Run: systemctl --version
	I0401 21:07:28.791882   70627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:07:28.961201   70627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:07:28.969654   70627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:07:28.969728   70627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:07:28.986375   70627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:07:28.986411   70627 start.go:495] detecting cgroup driver to use...
	I0401 21:07:28.986468   70627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:07:29.003118   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:07:29.017954   70627 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:07:29.018024   70627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:07:29.039725   70627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:07:29.056555   70627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:07:29.182669   70627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:07:29.336854   70627 docker.go:233] disabling docker service ...
	I0401 21:07:29.336911   70627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:07:29.354124   70627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:07:29.368340   70627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:07:29.535858   70627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:07:29.694425   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:07:29.713503   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:07:29.735749   70627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 21:07:29.735818   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.747810   70627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:07:29.747881   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.759913   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.777285   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.793765   70627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:07:29.806511   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.821740   70627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.845322   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.860990   70627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:07:29.874670   70627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:07:29.874736   70627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:07:29.893635   70627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:07:29.908790   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:30.038485   70627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:07:30.156804   70627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:07:30.156877   70627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:07:30.163177   70627 start.go:563] Will wait 60s for crictl version
	I0401 21:07:30.163270   70627 ssh_runner.go:195] Run: which crictl
	I0401 21:07:30.167977   70627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:07:30.229882   70627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:07:30.229963   70627 ssh_runner.go:195] Run: crio --version
	I0401 21:07:30.269347   70627 ssh_runner.go:195] Run: crio --version
	I0401 21:07:30.302624   70627 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 21:07:28.677559   72096 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0401 21:07:28.677751   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:28.677822   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:28.694049   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0401 21:07:28.694546   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:28.695167   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:07:28.695195   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:28.695565   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:28.695779   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:28.695920   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:28.696100   72096 start.go:159] libmachine.API.Create for "custom-flannel-269490" (driver="kvm2")
	I0401 21:07:28.696127   72096 client.go:168] LocalClient.Create starting
	I0401 21:07:28.696164   72096 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 21:07:28.696199   72096 main.go:141] libmachine: Decoding PEM data...
	I0401 21:07:28.696217   72096 main.go:141] libmachine: Parsing certificate...
	I0401 21:07:28.696268   72096 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 21:07:28.696301   72096 main.go:141] libmachine: Decoding PEM data...
	I0401 21:07:28.696318   72096 main.go:141] libmachine: Parsing certificate...
	I0401 21:07:28.696344   72096 main.go:141] libmachine: Running pre-create checks...
	I0401 21:07:28.696357   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .PreCreateCheck
	I0401 21:07:28.696663   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:28.697088   72096 main.go:141] libmachine: Creating machine...
	I0401 21:07:28.697104   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Create
	I0401 21:07:28.697278   72096 main.go:141] libmachine: (custom-flannel-269490) creating KVM machine...
	I0401 21:07:28.697294   72096 main.go:141] libmachine: (custom-flannel-269490) creating network...
	I0401 21:07:28.698499   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found existing default KVM network
	I0401 21:07:28.699714   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:28.699559   72184 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201380}
	I0401 21:07:28.699734   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | created network xml: 
	I0401 21:07:28.699747   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | <network>
	I0401 21:07:28.699756   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <name>mk-custom-flannel-269490</name>
	I0401 21:07:28.699772   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <dns enable='no'/>
	I0401 21:07:28.699783   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   
	I0401 21:07:28.699791   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 21:07:28.699801   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |     <dhcp>
	I0401 21:07:28.699814   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 21:07:28.699824   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |     </dhcp>
	I0401 21:07:28.699834   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   </ip>
	I0401 21:07:28.699842   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   
	I0401 21:07:28.699856   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | </network>
	I0401 21:07:28.699866   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | 
	I0401 21:07:28.705387   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | trying to create private KVM network mk-custom-flannel-269490 192.168.39.0/24...
	I0401 21:07:28.781748   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | private KVM network mk-custom-flannel-269490 192.168.39.0/24 created
	I0401 21:07:28.781785   72096 main.go:141] libmachine: (custom-flannel-269490) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 ...
	I0401 21:07:28.781803   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:28.781711   72184 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:28.781825   72096 main.go:141] libmachine: (custom-flannel-269490) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 21:07:28.781872   72096 main.go:141] libmachine: (custom-flannel-269490) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 21:07:29.058600   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.058491   72184 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa...
	I0401 21:07:29.284720   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.284560   72184 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/custom-flannel-269490.rawdisk...
	I0401 21:07:29.284762   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Writing magic tar header
	I0401 21:07:29.284781   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Writing SSH key tar header
	I0401 21:07:29.284790   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.284674   72184 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 ...
	I0401 21:07:29.284799   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490
	I0401 21:07:29.284806   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 21:07:29.284819   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:29.284829   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 21:07:29.284854   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 21:07:29.284877   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 (perms=drwx------)
	I0401 21:07:29.284897   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins
	I0401 21:07:29.284911   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home
	I0401 21:07:29.284916   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | skipping /home - not owner
	I0401 21:07:29.284927   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 21:07:29.284936   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 21:07:29.284947   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 21:07:29.284953   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 21:07:29.284961   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 21:07:29.284970   72096 main.go:141] libmachine: (custom-flannel-269490) creating domain...
	I0401 21:07:29.285984   72096 main.go:141] libmachine: (custom-flannel-269490) define libvirt domain using xml: 
	I0401 21:07:29.286030   72096 main.go:141] libmachine: (custom-flannel-269490) <domain type='kvm'>
	I0401 21:07:29.286042   72096 main.go:141] libmachine: (custom-flannel-269490)   <name>custom-flannel-269490</name>
	I0401 21:07:29.286047   72096 main.go:141] libmachine: (custom-flannel-269490)   <memory unit='MiB'>3072</memory>
	I0401 21:07:29.286087   72096 main.go:141] libmachine: (custom-flannel-269490)   <vcpu>2</vcpu>
	I0401 21:07:29.286134   72096 main.go:141] libmachine: (custom-flannel-269490)   <features>
	I0401 21:07:29.286149   72096 main.go:141] libmachine: (custom-flannel-269490)     <acpi/>
	I0401 21:07:29.286155   72096 main.go:141] libmachine: (custom-flannel-269490)     <apic/>
	I0401 21:07:29.286176   72096 main.go:141] libmachine: (custom-flannel-269490)     <pae/>
	I0401 21:07:29.286193   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286204   72096 main.go:141] libmachine: (custom-flannel-269490)   </features>
	I0401 21:07:29.286232   72096 main.go:141] libmachine: (custom-flannel-269490)   <cpu mode='host-passthrough'>
	I0401 21:07:29.286253   72096 main.go:141] libmachine: (custom-flannel-269490)   
	I0401 21:07:29.286262   72096 main.go:141] libmachine: (custom-flannel-269490)   </cpu>
	I0401 21:07:29.286271   72096 main.go:141] libmachine: (custom-flannel-269490)   <os>
	I0401 21:07:29.286281   72096 main.go:141] libmachine: (custom-flannel-269490)     <type>hvm</type>
	I0401 21:07:29.286291   72096 main.go:141] libmachine: (custom-flannel-269490)     <boot dev='cdrom'/>
	I0401 21:07:29.286299   72096 main.go:141] libmachine: (custom-flannel-269490)     <boot dev='hd'/>
	I0401 21:07:29.286309   72096 main.go:141] libmachine: (custom-flannel-269490)     <bootmenu enable='no'/>
	I0401 21:07:29.286318   72096 main.go:141] libmachine: (custom-flannel-269490)   </os>
	I0401 21:07:29.286327   72096 main.go:141] libmachine: (custom-flannel-269490)   <devices>
	I0401 21:07:29.286336   72096 main.go:141] libmachine: (custom-flannel-269490)     <disk type='file' device='cdrom'>
	I0401 21:07:29.286354   72096 main.go:141] libmachine: (custom-flannel-269490)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/boot2docker.iso'/>
	I0401 21:07:29.286364   72096 main.go:141] libmachine: (custom-flannel-269490)       <target dev='hdc' bus='scsi'/>
	I0401 21:07:29.286374   72096 main.go:141] libmachine: (custom-flannel-269490)       <readonly/>
	I0401 21:07:29.286383   72096 main.go:141] libmachine: (custom-flannel-269490)     </disk>
	I0401 21:07:29.286393   72096 main.go:141] libmachine: (custom-flannel-269490)     <disk type='file' device='disk'>
	I0401 21:07:29.286403   72096 main.go:141] libmachine: (custom-flannel-269490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 21:07:29.286417   72096 main.go:141] libmachine: (custom-flannel-269490)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/custom-flannel-269490.rawdisk'/>
	I0401 21:07:29.286425   72096 main.go:141] libmachine: (custom-flannel-269490)       <target dev='hda' bus='virtio'/>
	I0401 21:07:29.286439   72096 main.go:141] libmachine: (custom-flannel-269490)     </disk>
	I0401 21:07:29.286454   72096 main.go:141] libmachine: (custom-flannel-269490)     <interface type='network'>
	I0401 21:07:29.286466   72096 main.go:141] libmachine: (custom-flannel-269490)       <source network='mk-custom-flannel-269490'/>
	I0401 21:07:29.286478   72096 main.go:141] libmachine: (custom-flannel-269490)       <model type='virtio'/>
	I0401 21:07:29.286488   72096 main.go:141] libmachine: (custom-flannel-269490)     </interface>
	I0401 21:07:29.286497   72096 main.go:141] libmachine: (custom-flannel-269490)     <interface type='network'>
	I0401 21:07:29.286504   72096 main.go:141] libmachine: (custom-flannel-269490)       <source network='default'/>
	I0401 21:07:29.286528   72096 main.go:141] libmachine: (custom-flannel-269490)       <model type='virtio'/>
	I0401 21:07:29.286549   72096 main.go:141] libmachine: (custom-flannel-269490)     </interface>
	I0401 21:07:29.286563   72096 main.go:141] libmachine: (custom-flannel-269490)     <serial type='pty'>
	I0401 21:07:29.286573   72096 main.go:141] libmachine: (custom-flannel-269490)       <target port='0'/>
	I0401 21:07:29.286581   72096 main.go:141] libmachine: (custom-flannel-269490)     </serial>
	I0401 21:07:29.286603   72096 main.go:141] libmachine: (custom-flannel-269490)     <console type='pty'>
	I0401 21:07:29.286615   72096 main.go:141] libmachine: (custom-flannel-269490)       <target type='serial' port='0'/>
	I0401 21:07:29.286628   72096 main.go:141] libmachine: (custom-flannel-269490)     </console>
	I0401 21:07:29.286640   72096 main.go:141] libmachine: (custom-flannel-269490)     <rng model='virtio'>
	I0401 21:07:29.286652   72096 main.go:141] libmachine: (custom-flannel-269490)       <backend model='random'>/dev/random</backend>
	I0401 21:07:29.286663   72096 main.go:141] libmachine: (custom-flannel-269490)     </rng>
	I0401 21:07:29.286669   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286680   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286686   72096 main.go:141] libmachine: (custom-flannel-269490)   </devices>
	I0401 21:07:29.286706   72096 main.go:141] libmachine: (custom-flannel-269490) </domain>
	I0401 21:07:29.286723   72096 main.go:141] libmachine: (custom-flannel-269490) 
	I0401 21:07:29.290865   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:8b:9d:ef in network default
	I0401 21:07:29.291399   72096 main.go:141] libmachine: (custom-flannel-269490) starting domain...
	I0401 21:07:29.291422   72096 main.go:141] libmachine: (custom-flannel-269490) ensuring networks are active...
	I0401 21:07:29.291433   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:29.291982   72096 main.go:141] libmachine: (custom-flannel-269490) Ensuring network default is active
	I0401 21:07:29.292311   72096 main.go:141] libmachine: (custom-flannel-269490) Ensuring network mk-custom-flannel-269490 is active
	I0401 21:07:29.292850   72096 main.go:141] libmachine: (custom-flannel-269490) getting domain XML...
	I0401 21:07:29.293579   72096 main.go:141] libmachine: (custom-flannel-269490) creating domain...
	I0401 21:07:30.303928   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:30.307187   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:30.307572   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:30.307599   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:30.307851   70627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 21:07:30.312717   70627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:30.329656   70627 kubeadm.go:883] updating cluster {Name:kindnet-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:07:30.329769   70627 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:30.329840   70627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:30.373808   70627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 21:07:30.373892   70627 ssh_runner.go:195] Run: which lz4
	I0401 21:07:30.379933   70627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:07:30.385901   70627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:07:30.385939   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 21:07:32.049587   70627 crio.go:462] duration metric: took 1.669696993s to copy over tarball
	I0401 21:07:32.049659   70627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:07:29.263832   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:31.264708   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:33.769708   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:30.943467   72096 main.go:141] libmachine: (custom-flannel-269490) waiting for IP...
	I0401 21:07:30.944501   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:30.945048   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:30.945154   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:30.945061   72184 retry.go:31] will retry after 194.088864ms: waiting for domain to come up
	I0401 21:07:31.141228   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.142003   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.142032   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.141987   72184 retry.go:31] will retry after 322.526555ms: waiting for domain to come up
	I0401 21:07:31.466493   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.467103   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.467136   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.467085   72184 retry.go:31] will retry after 362.158292ms: waiting for domain to come up
	I0401 21:07:31.830645   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.831272   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.831294   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.831181   72184 retry.go:31] will retry after 507.010873ms: waiting for domain to come up
	I0401 21:07:32.340049   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:32.340688   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:32.340721   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:32.340672   72184 retry.go:31] will retry after 549.764239ms: waiting for domain to come up
	I0401 21:07:32.892498   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:32.893048   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:32.893109   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:32.893038   72184 retry.go:31] will retry after 893.566953ms: waiting for domain to come up
	I0401 21:07:33.788648   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:33.789231   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:33.789313   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:33.789217   72184 retry.go:31] will retry after 1.073160889s: waiting for domain to come up
	I0401 21:07:34.863948   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:34.864715   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:34.864744   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:34.864686   72184 retry.go:31] will retry after 1.137676024s: waiting for domain to come up
	I0401 21:07:34.855116   70627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.805424084s)
	I0401 21:07:34.855163   70627 crio.go:469] duration metric: took 2.805546758s to extract the tarball
	I0401 21:07:34.855174   70627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:07:34.908880   70627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:34.967377   70627 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 21:07:34.967406   70627 cache_images.go:84] Images are preloaded, skipping loading
	I0401 21:07:34.967416   70627 kubeadm.go:934] updating node { 192.168.72.200 8443 v1.32.2 crio true true} ...
	I0401 21:07:34.967548   70627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-269490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0401 21:07:34.967631   70627 ssh_runner.go:195] Run: crio config
	I0401 21:07:35.020670   70627 cni.go:84] Creating CNI manager for "kindnet"
	I0401 21:07:35.020696   70627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:07:35.020718   70627 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-269490 NodeName:kindnet-269490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 21:07:35.020839   70627 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-269490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 21:07:35.020907   70627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 21:07:35.030866   70627 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:07:35.030991   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:07:35.040113   70627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0401 21:07:35.058011   70627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:07:35.078574   70627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0401 21:07:35.098427   70627 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0401 21:07:35.103690   70627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:35.120443   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:35.277665   70627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:07:35.301275   70627 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490 for IP: 192.168.72.200
	I0401 21:07:35.301301   70627 certs.go:194] generating shared ca certs ...
	I0401 21:07:35.301323   70627 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:35.301486   70627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:07:35.301544   70627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:07:35.301556   70627 certs.go:256] generating profile certs ...
	I0401 21:07:35.301622   70627 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key
	I0401 21:07:35.301645   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt with IP's: []
	I0401 21:07:36.000768   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt ...
	I0401 21:07:36.000802   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: {Name:mk04a99f27c2f056a29fa36354c47c3222966cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.001003   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key ...
	I0401 21:07:36.001020   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key: {Name:mk5444fb90b1ff0a0c80a111598fb1ccc67e25fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.001135   70627 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5
	I0401 21:07:36.001155   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0401 21:07:36.090552   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 ...
	I0401 21:07:36.090588   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5: {Name:mk69f7dd622b7c419828c04f6ea380483c101940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.090767   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5 ...
	I0401 21:07:36.090785   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5: {Name:mkeaf32ff9453aef850a761332e7f9bb6dfc5cad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.090885   70627 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt
	I0401 21:07:36.090977   70627 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key
	I0401 21:07:36.091055   70627 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key
	I0401 21:07:36.091075   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt with IP's: []
	I0401 21:07:36.356603   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt ...
	I0401 21:07:36.356633   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt: {Name:mk053c71ff066a03a7f917f8347cef707651c156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.356813   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key ...
	I0401 21:07:36.356831   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key: {Name:mk7c401e3c137a1d374bd407e8454dc99cff1e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.357017   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:07:36.357068   70627 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:07:36.357083   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:07:36.357115   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:07:36.357170   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:07:36.357210   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:07:36.357269   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:36.357829   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:07:36.391336   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:07:36.425083   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:07:36.457892   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:07:36.492019   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 21:07:36.522365   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 21:07:36.547296   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:07:36.572536   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:07:36.598460   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:07:36.628401   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:07:36.658521   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:07:36.689061   70627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 21:07:36.714997   70627 ssh_runner.go:195] Run: openssl version
	I0401 21:07:36.723421   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:07:36.739419   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.745825   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.745888   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.754721   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:07:36.771512   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:07:36.789799   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.796727   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.796800   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.810295   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:07:36.824556   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:07:36.839972   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.847132   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.847202   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.854129   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 21:07:36.868264   70627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:07:36.873005   70627 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 21:07:36.873058   70627 kubeadm.go:392] StartCluster: {Name:kindnet-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:07:36.873147   70627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:07:36.873204   70627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:07:36.917357   70627 cri.go:89] found id: ""
	I0401 21:07:36.917434   70627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:07:36.928432   70627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:07:36.939322   70627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:07:36.949948   70627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:07:36.949975   70627 kubeadm.go:157] found existing configuration files:
	
	I0401 21:07:36.950027   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:07:36.959903   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:07:36.959979   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:07:36.970704   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:07:36.980434   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:07:36.980531   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:07:36.994176   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:07:37.007180   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:07:37.007238   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:07:37.017875   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:07:37.028242   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:07:37.028303   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:07:37.038869   70627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:07:37.095127   70627 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 21:07:37.095194   70627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:07:37.220077   70627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:07:37.220198   70627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:07:37.220346   70627 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:07:37.232593   70627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:07:38.460012   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:37.363938   70627 out.go:235]   - Generating certificates and keys ...
	I0401 21:07:37.364091   70627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:07:37.364186   70627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:07:37.410466   70627 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 21:07:37.746651   70627 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 21:07:38.065662   70627 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 21:07:38.284383   70627 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 21:07:38.672088   70627 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 21:07:38.672441   70627 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-269490 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0401 21:07:39.029897   70627 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 21:07:39.030235   70627 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-269490 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0401 21:07:39.197549   70627 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 21:07:39.291766   70627 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 21:07:39.461667   70627 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 21:07:39.461915   70627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:07:39.598656   70627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:07:39.836507   70627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 21:07:40.087046   70627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:07:40.167057   70627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:07:40.493658   70627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:07:40.494572   70627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:07:40.497003   70627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:07:36.004129   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:36.004736   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:36.004770   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:36.004691   72184 retry.go:31] will retry after 1.398747795s: waiting for domain to come up
	I0401 21:07:37.404982   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:37.405521   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:37.405562   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:37.405494   72184 retry.go:31] will retry after 1.806073182s: waiting for domain to come up
	I0401 21:07:39.213342   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:39.213908   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:39.213933   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:39.213880   72184 retry.go:31] will retry after 2.187010311s: waiting for domain to come up
	I0401 21:07:40.498949   70627 out.go:235]   - Booting up control plane ...
	I0401 21:07:40.499089   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:07:40.500823   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:07:40.502736   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:07:40.520810   70627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:07:40.529515   70627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:07:40.529647   70627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:07:40.738046   70627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 21:07:40.738253   70627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 21:07:41.738936   70627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001264949s
	I0401 21:07:41.739064   70627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 21:07:40.766840   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:42.802475   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:41.402690   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:41.403302   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:41.403328   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:41.403246   72184 retry.go:31] will retry after 2.956512585s: waiting for domain to come up
	I0401 21:07:44.361436   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:44.362043   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:44.362067   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:44.362014   72184 retry.go:31] will retry after 3.563399146s: waiting for domain to come up
	I0401 21:07:47.241056   70627 kubeadm.go:310] [api-check] The API server is healthy after 5.503493954s
	I0401 21:07:47.253704   70627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 21:07:47.270641   70627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 21:07:47.300541   70627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 21:07:47.300816   70627 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-269490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 21:07:47.320561   70627 kubeadm.go:310] [bootstrap-token] Using token: xu4lw3.orewvhbjfn5oas79
	I0401 21:07:47.322197   70627 out.go:235]   - Configuring RBAC rules ...
	I0401 21:07:47.322340   70627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 21:07:47.327157   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 21:07:47.334751   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 21:07:47.338556   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 21:07:47.342546   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 21:07:47.349586   70627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 21:07:47.650929   70627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 21:07:48.074376   70627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 21:07:48.652551   70627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 21:07:48.652572   70627 kubeadm.go:310] 
	I0401 21:07:48.652631   70627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 21:07:48.652637   70627 kubeadm.go:310] 
	I0401 21:07:48.652746   70627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 21:07:48.652757   70627 kubeadm.go:310] 
	I0401 21:07:48.652792   70627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 21:07:48.652887   70627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 21:07:48.652979   70627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 21:07:48.652989   70627 kubeadm.go:310] 
	I0401 21:07:48.653048   70627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 21:07:48.653063   70627 kubeadm.go:310] 
	I0401 21:07:48.653137   70627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 21:07:48.653146   70627 kubeadm.go:310] 
	I0401 21:07:48.653225   70627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 21:07:48.653312   70627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 21:07:48.653407   70627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 21:07:48.653421   70627 kubeadm.go:310] 
	I0401 21:07:48.653547   70627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 21:07:48.653624   70627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 21:07:48.653630   70627 kubeadm.go:310] 
	I0401 21:07:48.653714   70627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xu4lw3.orewvhbjfn5oas79 \
	I0401 21:07:48.653861   70627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 21:07:48.653901   70627 kubeadm.go:310] 	--control-plane 
	I0401 21:07:48.653911   70627 kubeadm.go:310] 
	I0401 21:07:48.653996   70627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 21:07:48.654008   70627 kubeadm.go:310] 
	I0401 21:07:48.654074   70627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xu4lw3.orewvhbjfn5oas79 \
	I0401 21:07:48.654207   70627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 21:07:48.654854   70627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 21:07:48.654879   70627 cni.go:84] Creating CNI manager for "kindnet"
	I0401 21:07:48.656486   70627 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 21:07:45.262936   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:46.762301   68904 pod_ready.go:93] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:46.762325   68904 pod_ready.go:82] duration metric: took 19.505245826s for pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:46.762339   68904 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-8lpnw" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:48.770589   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:47.927525   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:47.928071   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:47.928097   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:47.928026   72184 retry.go:31] will retry after 4.622496999s: waiting for domain to come up
	I0401 21:07:48.657874   70627 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 21:07:48.663855   70627 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 21:07:48.663882   70627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 21:07:48.684916   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 21:07:48.983530   70627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 21:07:48.983634   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:48.983651   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-269490 minikube.k8s.io/updated_at=2025_04_01T21_07_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=kindnet-269490 minikube.k8s.io/primary=true
	I0401 21:07:49.169988   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:49.170019   70627 ops.go:34] apiserver oom_adj: -16
	I0401 21:07:49.670692   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:50.170288   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:50.670668   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.170790   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.670642   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.775301   70627 kubeadm.go:1113] duration metric: took 2.791727937s to wait for elevateKubeSystemPrivileges
	I0401 21:07:51.775340   70627 kubeadm.go:394] duration metric: took 14.902284629s to StartCluster
	I0401 21:07:51.775359   70627 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:51.775433   70627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:07:51.776414   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:51.776667   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 21:07:51.776684   70627 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 21:07:51.776663   70627 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:07:51.776751   70627 addons.go:69] Setting storage-provisioner=true in profile "kindnet-269490"
	I0401 21:07:51.776767   70627 addons.go:238] Setting addon storage-provisioner=true in "kindnet-269490"
	I0401 21:07:51.776791   70627 host.go:66] Checking if "kindnet-269490" exists ...
	I0401 21:07:51.776801   70627 addons.go:69] Setting default-storageclass=true in profile "kindnet-269490"
	I0401 21:07:51.776821   70627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-269490"
	I0401 21:07:51.776876   70627 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:51.777230   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.777253   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.777275   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.777285   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.779535   70627 out.go:177] * Verifying Kubernetes components...
	I0401 21:07:51.780894   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:51.792573   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I0401 21:07:51.792618   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0401 21:07:51.793016   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.793065   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.793480   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.793504   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.793657   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.793680   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.794003   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.794035   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.794177   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.794522   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.794562   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.797467   70627 addons.go:238] Setting addon default-storageclass=true in "kindnet-269490"
	I0401 21:07:51.797509   70627 host.go:66] Checking if "kindnet-269490" exists ...
	I0401 21:07:51.797754   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.797788   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.812436   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0401 21:07:51.812455   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0401 21:07:51.812907   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.812960   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.813461   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.813479   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.813561   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.813576   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.813844   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.813927   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.813972   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.814617   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.814659   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.815559   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:51.818041   70627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:07:51.819387   70627 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:07:51.819404   70627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 21:07:51.819419   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:51.822051   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.822524   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:51.822549   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.822659   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:51.822828   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:51.822959   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:51.823080   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:51.830521   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I0401 21:07:51.830922   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.831277   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.831300   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.831604   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.831734   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.833172   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:51.833423   70627 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 21:07:51.833437   70627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 21:07:51.833452   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:51.835920   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.836208   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:51.836233   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.836310   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:51.836491   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:51.836611   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:51.836740   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:51.962702   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 21:07:51.987403   70627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:07:52.104000   70627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 21:07:52.189058   70627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:07:52.363088   70627 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0401 21:07:52.364340   70627 node_ready.go:35] waiting up to 15m0s for node "kindnet-269490" to be "Ready" ...
	I0401 21:07:52.440087   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.440110   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.440411   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.440428   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.440442   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.440451   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.440672   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.440687   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.440718   70627 main.go:141] libmachine: (kindnet-269490) DBG | Closing plugin on server side
	I0401 21:07:52.497451   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.497484   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.497812   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.497831   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.884016   70627 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-269490" context rescaled to 1 replicas
	I0401 21:07:52.961084   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.961107   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.961382   70627 main.go:141] libmachine: (kindnet-269490) DBG | Closing plugin on server side
	I0401 21:07:52.961424   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.961437   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.961455   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.961466   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.961684   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.961700   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.963918   70627 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 21:07:51.268101   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:53.269079   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:53.768113   68904 pod_ready.go:93] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.768135   68904 pod_ready.go:82] duration metric: took 7.005790357s for pod "calico-node-8lpnw" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.768143   68904 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.772369   68904 pod_ready.go:93] pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.772394   68904 pod_ready.go:82] duration metric: took 4.243794ms for pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.772406   68904 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.777208   68904 pod_ready.go:93] pod "etcd-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.777228   68904 pod_ready.go:82] duration metric: took 4.815519ms for pod "etcd-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.777237   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.780965   68904 pod_ready.go:93] pod "kube-apiserver-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.780986   68904 pod_ready.go:82] duration metric: took 3.742662ms for pod "kube-apiserver-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.780997   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.785450   68904 pod_ready.go:93] pod "kube-controller-manager-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.785473   68904 pod_ready.go:82] duration metric: took 4.467871ms for pod "kube-controller-manager-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.785484   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-clkkm" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.166123   68904 pod_ready.go:93] pod "kube-proxy-clkkm" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:54.166149   68904 pod_ready.go:82] duration metric: took 380.656026ms for pod "kube-proxy-clkkm" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.166161   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.567079   68904 pod_ready.go:93] pod "kube-scheduler-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:54.567105   68904 pod_ready.go:82] duration metric: took 400.93599ms for pod "kube-scheduler-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.567118   68904 pod_ready.go:39] duration metric: took 27.313232071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:07:54.567135   68904 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:07:54.567190   68904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:07:54.583839   68904 api_server.go:72] duration metric: took 36.214254974s to wait for apiserver process to appear ...
	I0401 21:07:54.583866   68904 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:07:54.583887   68904 api_server.go:253] Checking apiserver healthz at https://192.168.61.102:8443/healthz ...
	I0401 21:07:54.588495   68904 api_server.go:279] https://192.168.61.102:8443/healthz returned 200:
	ok
	I0401 21:07:54.589645   68904 api_server.go:141] control plane version: v1.32.2
	I0401 21:07:54.589671   68904 api_server.go:131] duration metric: took 5.795827ms to wait for apiserver health ...
	I0401 21:07:54.589681   68904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:07:54.767449   68904 system_pods.go:59] 9 kube-system pods found
	I0401 21:07:54.767492   68904 system_pods.go:61] "calico-kube-controllers-77969b7d87-64swg" [34a618ff-c7cd-447e-9ef9-32357bcf9e42] Running
	I0401 21:07:54.767499   68904 system_pods.go:61] "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
	I0401 21:07:54.767503   68904 system_pods.go:61] "coredns-668d6bf9bc-mn944" [fb12f605-c79b-4cdf-92c3-2a3bf9319b9f] Running
	I0401 21:07:54.767507   68904 system_pods.go:61] "etcd-calico-269490" [60128f13-ff1b-43d3-9577-30032cbc1224] Running
	I0401 21:07:54.767510   68904 system_pods.go:61] "kube-apiserver-calico-269490" [7bc4e2df-17c3-4c16-8fc4-6cbd4d194757] Running
	I0401 21:07:54.767513   68904 system_pods.go:61] "kube-controller-manager-calico-269490" [bada65a1-db90-4fe8-b3da-f55647a2a5f5] Running
	I0401 21:07:54.767516   68904 system_pods.go:61] "kube-proxy-clkkm" [20def08e-d6ad-4685-91cf-658019584c13] Running
	I0401 21:07:54.767519   68904 system_pods.go:61] "kube-scheduler-calico-269490" [02f99ab0-d476-4e0a-b12b-b62d8fded21c] Running
	I0401 21:07:54.767522   68904 system_pods.go:61] "storage-provisioner" [dea0b01b-b565-4ea8-b740-28125b3c579c] Running
	I0401 21:07:54.767528   68904 system_pods.go:74] duration metric: took 177.841641ms to wait for pod list to return data ...
	I0401 21:07:54.767537   68904 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:07:54.967440   68904 default_sa.go:45] found service account: "default"
	I0401 21:07:54.967473   68904 default_sa.go:55] duration metric: took 199.928997ms for default service account to be created ...
	I0401 21:07:54.967485   68904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:07:55.168431   68904 system_pods.go:86] 9 kube-system pods found
	I0401 21:07:55.168456   68904 system_pods.go:89] "calico-kube-controllers-77969b7d87-64swg" [34a618ff-c7cd-447e-9ef9-32357bcf9e42] Running
	I0401 21:07:55.168462   68904 system_pods.go:89] "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
	I0401 21:07:55.168466   68904 system_pods.go:89] "coredns-668d6bf9bc-mn944" [fb12f605-c79b-4cdf-92c3-2a3bf9319b9f] Running
	I0401 21:07:55.168469   68904 system_pods.go:89] "etcd-calico-269490" [60128f13-ff1b-43d3-9577-30032cbc1224] Running
	I0401 21:07:55.168472   68904 system_pods.go:89] "kube-apiserver-calico-269490" [7bc4e2df-17c3-4c16-8fc4-6cbd4d194757] Running
	I0401 21:07:55.168475   68904 system_pods.go:89] "kube-controller-manager-calico-269490" [bada65a1-db90-4fe8-b3da-f55647a2a5f5] Running
	I0401 21:07:55.168478   68904 system_pods.go:89] "kube-proxy-clkkm" [20def08e-d6ad-4685-91cf-658019584c13] Running
	I0401 21:07:55.168481   68904 system_pods.go:89] "kube-scheduler-calico-269490" [02f99ab0-d476-4e0a-b12b-b62d8fded21c] Running
	I0401 21:07:55.168484   68904 system_pods.go:89] "storage-provisioner" [dea0b01b-b565-4ea8-b740-28125b3c579c] Running
	I0401 21:07:55.168490   68904 system_pods.go:126] duration metric: took 200.999651ms to wait for k8s-apps to be running ...
	I0401 21:07:55.168499   68904 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:07:55.168548   68904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:07:55.186472   68904 system_svc.go:56] duration metric: took 17.963992ms WaitForService to wait for kubelet
	I0401 21:07:55.186500   68904 kubeadm.go:582] duration metric: took 36.816918566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:07:55.186519   68904 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:07:55.366862   68904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:07:55.366898   68904 node_conditions.go:123] node cpu capacity is 2
	I0401 21:07:55.366915   68904 node_conditions.go:105] duration metric: took 180.387995ms to run NodePressure ...
	I0401 21:07:55.366931   68904 start.go:241] waiting for startup goroutines ...
	I0401 21:07:55.366942   68904 start.go:246] waiting for cluster config update ...
	I0401 21:07:55.366957   68904 start.go:255] writing updated cluster config ...
	I0401 21:07:55.367292   68904 ssh_runner.go:195] Run: rm -f paused
	I0401 21:07:55.418044   68904 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:07:55.419536   68904 out.go:177] * Done! kubectl is now configured to use "calico-269490" cluster and "default" namespace by default
	I0401 21:07:52.552419   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.553020   72096 main.go:141] libmachine: (custom-flannel-269490) found domain IP: 192.168.39.115
	I0401 21:07:52.553043   72096 main.go:141] libmachine: (custom-flannel-269490) reserving static IP address...
	I0401 21:07:52.553055   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has current primary IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.553551   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find host DHCP lease matching {name: "custom-flannel-269490", mac: "52:54:00:bc:3c:1b", ip: "192.168.39.115"} in network mk-custom-flannel-269490
	I0401 21:07:52.633446   72096 main.go:141] libmachine: (custom-flannel-269490) reserved static IP address 192.168.39.115 for domain custom-flannel-269490
	I0401 21:07:52.633469   72096 main.go:141] libmachine: (custom-flannel-269490) waiting for SSH...
	I0401 21:07:52.633478   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Getting to WaitForSSH function...
	I0401 21:07:52.636801   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.637228   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.637263   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.637457   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Using SSH client type: external
	I0401 21:07:52.637483   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa (-rw-------)
	I0401 21:07:52.637524   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:07:52.637538   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | About to run SSH command:
	I0401 21:07:52.637570   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | exit 0
	I0401 21:07:52.767648   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | SSH cmd err, output: <nil>: 
	I0401 21:07:52.767922   72096 main.go:141] libmachine: (custom-flannel-269490) KVM machine creation complete
	I0401 21:07:52.768285   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:52.769401   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:52.769639   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:52.769839   72096 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 21:07:52.769855   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:07:52.771616   72096 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 21:07:52.771628   72096 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 21:07:52.771640   72096 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 21:07:52.771646   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:52.773957   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.774313   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.774339   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.774551   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:52.774732   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.774869   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.775003   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:52.775127   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:52.775341   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:52.775351   72096 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 21:07:52.885967   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:52.885995   72096 main.go:141] libmachine: Detecting the provisioner...
	I0401 21:07:52.886036   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:52.889797   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.890333   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.890380   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.890594   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:52.890795   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.891024   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.891176   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:52.891385   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:52.891599   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:52.891613   72096 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 21:07:52.999399   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 21:07:52.999480   72096 main.go:141] libmachine: found compatible host: buildroot
	I0401 21:07:52.999494   72096 main.go:141] libmachine: Provisioning with buildroot...
	I0401 21:07:52.999506   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:52.999737   72096 buildroot.go:166] provisioning hostname "custom-flannel-269490"
	I0401 21:07:52.999763   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:52.999983   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.002673   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.003040   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.003073   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.003201   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.003383   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.003531   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.003684   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.003853   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.004063   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.004074   72096 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-269490 && echo "custom-flannel-269490" | sudo tee /etc/hostname
	I0401 21:07:53.127662   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-269490
	
	I0401 21:07:53.127688   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.130650   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.131060   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.131088   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.131247   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.131442   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.131605   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.131748   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.131909   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.132149   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.132167   72096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-269490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-269490/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-269490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:07:53.247895   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:53.247927   72096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:07:53.247979   72096 buildroot.go:174] setting up certificates
	I0401 21:07:53.247998   72096 provision.go:84] configureAuth start
	I0401 21:07:53.248027   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:53.248299   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:53.251231   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.251683   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.251709   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.251871   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.254321   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.254634   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.254653   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.254785   72096 provision.go:143] copyHostCerts
	I0401 21:07:53.254838   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:07:53.254869   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:07:53.254963   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:07:53.255070   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:07:53.255080   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:07:53.255101   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:07:53.255172   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:07:53.255181   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:07:53.255206   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:07:53.255307   72096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-269490 san=[127.0.0.1 192.168.39.115 custom-flannel-269490 localhost minikube]
	I0401 21:07:53.423568   72096 provision.go:177] copyRemoteCerts
	I0401 21:07:53.423622   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:07:53.423644   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.426471   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.426823   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.426852   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.427026   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.427209   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.427437   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.427602   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:53.508573   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 21:07:53.534446   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:07:53.561750   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0401 21:07:53.586361   72096 provision.go:87] duration metric: took 338.347084ms to configureAuth
	I0401 21:07:53.586388   72096 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:07:53.586608   72096 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:53.586686   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.589262   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.589618   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.589647   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.589793   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.589985   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.590141   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.590283   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.590430   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.590630   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.590647   72096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:07:53.833008   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:07:53.833038   72096 main.go:141] libmachine: Checking connection to Docker...
	I0401 21:07:53.833049   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetURL
	I0401 21:07:53.834302   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | using libvirt version 6000000
	I0401 21:07:53.836570   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.836875   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.836903   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.837075   72096 main.go:141] libmachine: Docker is up and running!
	I0401 21:07:53.837093   72096 main.go:141] libmachine: Reticulating splines...
	I0401 21:07:53.837101   72096 client.go:171] duration metric: took 25.140961475s to LocalClient.Create
	I0401 21:07:53.837125   72096 start.go:167] duration metric: took 25.141025877s to libmachine.API.Create "custom-flannel-269490"
	I0401 21:07:53.837139   72096 start.go:293] postStartSetup for "custom-flannel-269490" (driver="kvm2")
	I0401 21:07:53.837151   72096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:07:53.837182   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:53.837406   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:07:53.837430   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.839674   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.839944   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.839977   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.840131   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.840293   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.840438   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.840600   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:53.925709   72096 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:07:53.930726   72096 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:07:53.930754   72096 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:07:53.930830   72096 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:07:53.930898   72096 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:07:53.931007   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:07:53.941164   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:53.967167   72096 start.go:296] duration metric: took 130.01591ms for postStartSetup
	I0401 21:07:53.967217   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:53.967908   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:53.970732   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.971053   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.971088   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.971318   72096 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json ...
	I0401 21:07:53.971510   72096 start.go:128] duration metric: took 25.295908261s to createHost
	I0401 21:07:53.971537   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.973863   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.974196   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.974232   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.974386   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.974599   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.974774   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.974910   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.975100   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.975291   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.975302   72096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:07:54.083312   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541674.029447156
	
	I0401 21:07:54.083342   72096 fix.go:216] guest clock: 1743541674.029447156
	I0401 21:07:54.083352   72096 fix.go:229] Guest: 2025-04-01 21:07:54.029447156 +0000 UTC Remote: 2025-04-01 21:07:53.971522792 +0000 UTC m=+33.113971403 (delta=57.924364ms)
	I0401 21:07:54.083375   72096 fix.go:200] guest clock delta is within tolerance: 57.924364ms
	I0401 21:07:54.083382   72096 start.go:83] releasing machines lock for "custom-flannel-269490", held for 25.407944503s
	I0401 21:07:54.083403   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.083645   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:54.086274   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.086622   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.086664   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.086836   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087440   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087609   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087702   72096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:07:54.087739   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:54.087821   72096 ssh_runner.go:195] Run: cat /version.json
	I0401 21:07:54.087841   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:54.090554   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.090879   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.090964   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.090990   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.091165   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:54.091298   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.091302   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:54.091344   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.091468   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:54.091525   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:54.091593   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:54.091664   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:54.091714   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:54.091847   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:54.193585   72096 ssh_runner.go:195] Run: systemctl --version
	I0401 21:07:54.199802   72096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:07:54.362009   72096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:07:54.369775   72096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:07:54.369842   72096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:07:54.392464   72096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:07:54.392493   72096 start.go:495] detecting cgroup driver to use...
	I0401 21:07:54.392575   72096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:07:54.415229   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:07:54.430169   72096 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:07:54.430260   72096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:07:54.446557   72096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:07:54.462441   72096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:07:54.581314   72096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:07:54.782985   72096 docker.go:233] disabling docker service ...
	I0401 21:07:54.783048   72096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:07:54.799920   72096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:07:54.817125   72096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:07:54.954170   72096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:07:55.099520   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:07:55.125853   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:07:55.147184   72096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 21:07:55.147253   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.158166   72096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:07:55.158264   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.169739   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.180580   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.192009   72096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:07:55.202863   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.213770   72096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.232492   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.243279   72096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:07:55.252819   72096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:07:55.252890   72096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:07:55.266009   72096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:07:55.276185   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:55.393356   72096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:07:55.494039   72096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:07:55.494118   72096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:07:55.499309   72096 start.go:563] Will wait 60s for crictl version
	I0401 21:07:55.499366   72096 ssh_runner.go:195] Run: which crictl
	I0401 21:07:55.503928   72096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:07:55.551590   72096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:07:55.551671   72096 ssh_runner.go:195] Run: crio --version
	I0401 21:07:55.584117   72096 ssh_runner.go:195] Run: crio --version
	I0401 21:07:55.615306   72096 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 21:07:55.616535   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:55.619254   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:55.619608   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:55.619636   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:55.619847   72096 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 21:07:55.624474   72096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:55.638014   72096 kubeadm.go:883] updating cluster {Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:07:55.638113   72096 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:55.638154   72096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:55.671768   72096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 21:07:55.671841   72096 ssh_runner.go:195] Run: which lz4
	I0401 21:07:55.675956   72096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:07:55.680087   72096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:07:55.680112   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 21:07:52.964723   70627 addons.go:514] duration metric: took 1.188041211s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:07:54.369067   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:07:56.867804   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:07:57.258849   72096 crio.go:462] duration metric: took 1.582927832s to copy over tarball
	I0401 21:07:57.258910   72096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:07:59.713811   72096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.454879542s)
	I0401 21:07:59.713834   72096 crio.go:469] duration metric: took 2.454960019s to extract the tarball
	I0401 21:07:59.713841   72096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:07:59.754131   72096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:59.803175   72096 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 21:07:59.803203   72096 cache_images.go:84] Images are preloaded, skipping loading
	I0401 21:07:59.803211   72096 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.32.2 crio true true} ...
	I0401 21:07:59.803435   72096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-269490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0401 21:07:59.803542   72096 ssh_runner.go:195] Run: crio config
	I0401 21:07:59.859211   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:07:59.859254   72096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:07:59.859279   72096 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-269490 NodeName:custom-flannel-269490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 21:07:59.859420   72096 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-269490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.115"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 21:07:59.859485   72096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 21:07:59.872776   72096 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:07:59.872854   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:07:59.885208   72096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0401 21:07:59.906315   72096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:07:59.925314   72096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2301 bytes)
	I0401 21:07:59.945350   72096 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0401 21:07:59.949720   72096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:59.963662   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:08:00.089313   72096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:08:00.110067   72096 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490 for IP: 192.168.39.115
	I0401 21:08:00.110106   72096 certs.go:194] generating shared ca certs ...
	I0401 21:08:00.110120   72096 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.110294   72096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:08:00.110353   72096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:08:00.110366   72096 certs.go:256] generating profile certs ...
	I0401 21:08:00.110447   72096 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key
	I0401 21:08:00.110464   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt with IP's: []
	I0401 21:08:00.467453   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt ...
	I0401 21:08:00.467488   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: {Name:mk5fce7bdfd13ea831b9ad59ba060161e466fba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.467673   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key ...
	I0401 21:08:00.467686   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key: {Name:mkd84c13916801a689354e72412e009ab37dbcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.467762   72096 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe
	I0401 21:08:00.467777   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.115]
	I0401 21:08:00.590635   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe ...
	I0401 21:08:00.590669   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe: {Name:mkda99eea5992b7c522818c8e4285bad25863233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.590826   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe ...
	I0401 21:08:00.590839   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe: {Name:mk9b0cf3137043b92f3b27be430ec53f12f6344f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.590912   72096 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt
	I0401 21:08:00.590994   72096 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key
	I0401 21:08:00.591062   72096 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key
	I0401 21:08:00.591077   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt with IP's: []
	I0401 21:08:00.940635   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt ...
	I0401 21:08:00.940673   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt: {Name:mked6a267559570093b231c1df683bf03eedde35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.940870   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key ...
	I0401 21:08:00.940890   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key: {Name:mke111681e05b7c77b9764da674c41796facd6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.941091   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:08:00.941141   72096 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:08:00.941157   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:08:00.941192   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:08:00.941230   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:08:00.941263   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:08:00.941317   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:08:00.941848   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:08:01.021801   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:08:01.047883   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:08:01.076127   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:08:01.101880   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 21:08:01.128066   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 21:08:01.155676   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:08:01.181194   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:08:01.208023   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:08:01.235447   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:08:01.263882   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:08:01.291788   72096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 21:08:01.311432   72096 ssh_runner.go:195] Run: openssl version
	I0401 21:08:01.317827   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:08:01.330054   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.335156   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.335215   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.341534   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:08:01.353100   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:08:01.364974   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.370126   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.370182   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.376077   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:08:01.387280   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:08:01.398763   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.403624   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.403672   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.409399   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 21:08:01.421319   72096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:08:01.426390   72096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 21:08:01.426469   72096 kubeadm.go:392] StartCluster: {Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:08:01.426539   72096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:08:01.426621   72096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:08:01.483618   72096 cri.go:89] found id: ""
	I0401 21:08:01.483709   72096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:08:01.497458   72096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:08:01.510064   72096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:08:01.525097   72096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:08:01.525126   72096 kubeadm.go:157] found existing configuration files:
	
	I0401 21:08:01.525187   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:08:01.538475   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:08:01.538537   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:08:01.549865   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:08:01.564435   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:08:01.564512   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:08:01.577112   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:08:01.588654   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:08:01.588723   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:08:01.600399   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:08:01.611302   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:08:01.611382   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
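The block above, from "config check failed" through the last rm, is minikube's stale-kubeconfig cleanup: each expected file under /etc/kubernetes is grepped for the in-cluster endpoint and removed when it does not reference it (here the files simply do not exist yet). A hedged shell sketch of that loop, with the file list and endpoint taken from the log and the loop itself illustrative rather than minikube's actual code:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done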
	I0401 21:08:01.626795   72096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:08:01.706166   72096 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 21:08:01.706290   72096 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:01.816483   72096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:01.816607   72096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:01.816718   72096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:08:01.826517   72096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:07:59.368327   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:01.867707   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:01.944921   72096 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:01.945033   72096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:01.945102   72096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:01.997637   72096 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 21:08:02.082193   72096 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 21:08:02.370051   72096 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 21:08:02.610131   72096 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 21:08:02.813327   72096 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 21:08:02.813505   72096 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-269490 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0401 21:08:02.959340   72096 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 21:08:02.959508   72096 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-269490 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0401 21:08:03.064528   72096 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 21:08:03.321464   72096 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 21:08:03.362989   72096 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 21:08:03.363077   72096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:03.478482   72096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:03.742329   72096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 21:08:03.877782   72096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:04.064813   72096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:04.137063   72096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:04.137482   72096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:04.141208   72096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:04.143036   72096 out.go:235]   - Booting up control plane ...
	I0401 21:08:04.143157   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:04.144620   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:04.145423   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:04.172192   72096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:04.183885   72096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:04.183985   72096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:04.340951   72096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 21:08:04.341118   72096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 21:08:04.842463   72096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.673213ms
	I0401 21:08:04.842565   72096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
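The kubelet-check and api-check phases above poll two health endpoints until they answer. The same probes can be issued manually on the node; port 10248 and the 192.168.39.115:8443 address come from this log, and -k is only needed because the API server presents a cluster-internal certificate:

    curl -s  http://127.0.0.1:10248/healthz        # kubelet healthz, plain HTTP on localhost
    curl -sk https://192.168.39.115:8443/healthz   # API server healthz over HTTPS

Both print "ok" once the components are up, which is what the "healthy after ..." lines record.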
	I0401 21:08:03.867783   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:05.868899   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:10.848073   72096 kubeadm.go:310] [api-check] The API server is healthy after 6.003303805s
	I0401 21:08:10.859890   72096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 21:08:10.875896   72096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 21:08:10.906682   72096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 21:08:10.906981   72096 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-269490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 21:08:10.931670   72096 kubeadm.go:310] [bootstrap-token] Using token: y1rxzx.ol9rd2e05i88tezo
	I0401 21:08:07.870418   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:09.374853   70627 node_ready.go:49] node "kindnet-269490" has status "Ready":"True"
	I0401 21:08:09.374880   70627 node_ready.go:38] duration metric: took 17.010513164s for node "kindnet-269490" to be "Ready" ...
	I0401 21:08:09.374892   70627 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:09.378622   70627 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.383841   70627 pod_ready.go:93] pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.383869   70627 pod_ready.go:82] duration metric: took 1.005212656s for pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.383881   70627 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.388202   70627 pod_ready.go:93] pod "etcd-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.388230   70627 pod_ready.go:82] duration metric: took 4.341416ms for pod "etcd-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.388246   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.393029   70627 pod_ready.go:93] pod "kube-apiserver-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.393061   70627 pod_ready.go:82] duration metric: took 4.797935ms for pod "kube-apiserver-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.393076   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.397690   70627 pod_ready.go:93] pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.397711   70627 pod_ready.go:82] duration metric: took 4.626561ms for pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.397722   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b5cp4" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.570047   70627 pod_ready.go:93] pod "kube-proxy-b5cp4" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.570070   70627 pod_ready.go:82] duration metric: took 172.341286ms for pod "kube-proxy-b5cp4" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.570080   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.969135   70627 pod_ready.go:93] pod "kube-scheduler-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.969167   70627 pod_ready.go:82] duration metric: took 399.078827ms for pod "kube-scheduler-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.969182   70627 pod_ready.go:39] duration metric: took 1.594272558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
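The pod_ready loop above (pid 70627) waits for each system-critical pod by label. Once the kindnet-269490 context exists, roughly the same wait can be expressed with kubectl directly; the label selectors mirror the list in the log, and the timeout value is illustrative:

    kubectl --context kindnet-269490 -n kube-system wait pod -l k8s-app=kube-dns   --for=condition=Ready --timeout=15m
    kubectl --context kindnet-269490 -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=15m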
	I0401 21:08:10.969200   70627 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:08:10.969260   70627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:08:10.986425   70627 api_server.go:72] duration metric: took 19.20965796s to wait for apiserver process to appear ...
	I0401 21:08:10.986449   70627 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:08:10.986476   70627 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0401 21:08:10.991890   70627 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0401 21:08:10.993199   70627 api_server.go:141] control plane version: v1.32.2
	I0401 21:08:10.993221   70627 api_server.go:131] duration metric: took 6.765166ms to wait for apiserver health ...
	I0401 21:08:10.993228   70627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:08:11.169752   70627 system_pods.go:59] 8 kube-system pods found
	I0401 21:08:11.169784   70627 system_pods.go:61] "coredns-668d6bf9bc-fqk9t" [1aa997a2-044b-4f1e-bd5f-eb88acdcd380] Running
	I0401 21:08:11.169789   70627 system_pods.go:61] "etcd-kindnet-269490" [6eb8dc71-efc6-40e9-89db-6947499e653f] Running
	I0401 21:08:11.169793   70627 system_pods.go:61] "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
	I0401 21:08:11.169796   70627 system_pods.go:61] "kube-apiserver-kindnet-269490" [35601d6b-2485-45ff-b906-80cd3d73bb50] Running
	I0401 21:08:11.169800   70627 system_pods.go:61] "kube-controller-manager-kindnet-269490" [75f07631-fab7-404a-b309-4ea7d2af791e] Running
	I0401 21:08:11.169803   70627 system_pods.go:61] "kube-proxy-b5cp4" [dce5a6b6-9133-4a63-b683-ffbe803e9481] Running
	I0401 21:08:11.169806   70627 system_pods.go:61] "kube-scheduler-kindnet-269490" [313714c7-ef0d-4991-b38e-7ea5d1815849] Running
	I0401 21:08:11.169808   70627 system_pods.go:61] "storage-provisioner" [39894cc3-b618-4ee1-8a46-7065c914830c] Running
	I0401 21:08:11.169816   70627 system_pods.go:74] duration metric: took 176.581209ms to wait for pod list to return data ...
	I0401 21:08:11.169825   70627 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:08:11.370607   70627 default_sa.go:45] found service account: "default"
	I0401 21:08:11.370635   70627 default_sa.go:55] duration metric: took 200.803332ms for default service account to be created ...
	I0401 21:08:11.370646   70627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:08:11.570070   70627 system_pods.go:86] 8 kube-system pods found
	I0401 21:08:11.570099   70627 system_pods.go:89] "coredns-668d6bf9bc-fqk9t" [1aa997a2-044b-4f1e-bd5f-eb88acdcd380] Running
	I0401 21:08:11.570105   70627 system_pods.go:89] "etcd-kindnet-269490" [6eb8dc71-efc6-40e9-89db-6947499e653f] Running
	I0401 21:08:11.570109   70627 system_pods.go:89] "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
	I0401 21:08:11.570112   70627 system_pods.go:89] "kube-apiserver-kindnet-269490" [35601d6b-2485-45ff-b906-80cd3d73bb50] Running
	I0401 21:08:11.570116   70627 system_pods.go:89] "kube-controller-manager-kindnet-269490" [75f07631-fab7-404a-b309-4ea7d2af791e] Running
	I0401 21:08:11.570118   70627 system_pods.go:89] "kube-proxy-b5cp4" [dce5a6b6-9133-4a63-b683-ffbe803e9481] Running
	I0401 21:08:11.570122   70627 system_pods.go:89] "kube-scheduler-kindnet-269490" [313714c7-ef0d-4991-b38e-7ea5d1815849] Running
	I0401 21:08:11.570125   70627 system_pods.go:89] "storage-provisioner" [39894cc3-b618-4ee1-8a46-7065c914830c] Running
	I0401 21:08:11.570132   70627 system_pods.go:126] duration metric: took 199.479575ms to wait for k8s-apps to be running ...
	I0401 21:08:11.570138   70627 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:08:11.570180   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:08:11.587544   70627 system_svc.go:56] duration metric: took 17.395489ms WaitForService to wait for kubelet
	I0401 21:08:11.587581   70627 kubeadm.go:582] duration metric: took 19.810818504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
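After the readiness waits, the log verifies system pods, the default service account, and the kubelet service, matching the map in the line above. A manual equivalent of those three checks (illustrative; the node-side command would be run through minikube ssh):

    kubectl --context kindnet-269490 get pods -n kube-system     # expect the 8 pods listed above, all Running
    kubectl --context kindnet-269490 get serviceaccount default  # the "default" service account
    sudo systemctl is-active kubelet                              # on the node: prints "active"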
	I0401 21:08:11.587624   70627 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:08:11.769946   70627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:08:11.769972   70627 node_conditions.go:123] node cpu capacity is 2
	I0401 21:08:11.769983   70627 node_conditions.go:105] duration metric: took 182.353276ms to run NodePressure ...
	I0401 21:08:11.769993   70627 start.go:241] waiting for startup goroutines ...
	I0401 21:08:11.770001   70627 start.go:246] waiting for cluster config update ...
	I0401 21:08:11.770014   70627 start.go:255] writing updated cluster config ...
	I0401 21:08:11.770327   70627 ssh_runner.go:195] Run: rm -f paused
	I0401 21:08:11.821228   70627 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:08:11.823026   70627 out.go:177] * Done! kubectl is now configured to use "kindnet-269490" cluster and "default" namespace by default
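With the "Done!" line, kindnet-269490 is the active kubectl context, so a quick smoke test (purely a usage note, not part of the test) would be:

    kubectl --context kindnet-269490 get nodes -o wide
    kubectl --context kindnet-269490 get pods -A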
	I0401 21:08:10.933219   72096 out.go:235]   - Configuring RBAC rules ...
	I0401 21:08:10.933389   72096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 21:08:10.953572   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 21:08:10.970295   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 21:08:10.974769   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 21:08:10.978152   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 21:08:10.982485   72096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 21:08:11.255128   72096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 21:08:11.700130   72096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 21:08:12.254377   72096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 21:08:12.254408   72096 kubeadm.go:310] 
	I0401 21:08:12.254498   72096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 21:08:12.254529   72096 kubeadm.go:310] 
	I0401 21:08:12.254681   72096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 21:08:12.254700   72096 kubeadm.go:310] 
	I0401 21:08:12.254729   72096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 21:08:12.254812   72096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 21:08:12.254885   72096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 21:08:12.254895   72096 kubeadm.go:310] 
	I0401 21:08:12.254989   72096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 21:08:12.254999   72096 kubeadm.go:310] 
	I0401 21:08:12.255069   72096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 21:08:12.255078   72096 kubeadm.go:310] 
	I0401 21:08:12.255148   72096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 21:08:12.255261   72096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 21:08:12.255357   72096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 21:08:12.255368   72096 kubeadm.go:310] 
	I0401 21:08:12.255483   72096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 21:08:12.255610   72096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 21:08:12.255631   72096 kubeadm.go:310] 
	I0401 21:08:12.255741   72096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y1rxzx.ol9rd2e05i88tezo \
	I0401 21:08:12.255881   72096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 21:08:12.255916   72096 kubeadm.go:310] 	--control-plane 
	I0401 21:08:12.255926   72096 kubeadm.go:310] 
	I0401 21:08:12.256021   72096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 21:08:12.256030   72096 kubeadm.go:310] 
	I0401 21:08:12.256150   72096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y1rxzx.ol9rd2e05i88tezo \
	I0401 21:08:12.256298   72096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 21:08:12.257066   72096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
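Two follow-ups to the kubeadm output above, both standard kubeadm/systemd commands rather than anything this log runs: the [WARNING Service-Kubelet] line is cleared by enabling the unit, and the printed join command can be regenerated later because the embedded bootstrap token expires:

    sudo systemctl enable kubelet.service
    sudo kubeadm token create --print-join-command   # prints a fresh worker join command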
	I0401 21:08:12.257093   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:08:12.259236   72096 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0401 21:08:12.260686   72096 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 21:08:12.260745   72096 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0401 21:08:12.267034   72096 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0401 21:08:12.267068   72096 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0401 21:08:12.296900   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
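The CNI step copies testdata/kube-flannel.yaml to the node and applies it with the bundled kubectl against /var/lib/minikube/kubeconfig. From a workstation, once the profile's context exists, the equivalent is simply (path relative to the test tree, as in the log):

    kubectl --context custom-flannel-269490 apply -f testdata/kube-flannel.yaml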
	I0401 21:08:12.848752   72096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 21:08:12.848860   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:12.848947   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-269490 minikube.k8s.io/updated_at=2025_04_01T21_08_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=custom-flannel-269490 minikube.k8s.io/primary=true
	I0401 21:08:12.877431   72096 ops.go:34] apiserver oom_adj: -16
	I0401 21:08:12.985187   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:13.485414   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:13.985981   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:14.485489   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:14.985825   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:15.485827   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:15.985754   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:16.148701   72096 kubeadm.go:1113] duration metric: took 3.299903142s to wait for elevateKubeSystemPrivileges
	I0401 21:08:16.148749   72096 kubeadm.go:394] duration metric: took 14.722285454s to StartCluster
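The elevateKubeSystemPrivileges phase above creates the minikube-rbac cluster role binding and retries `kubectl get sa default` until the default service account appears. The binding it leaves behind can be inspected afterwards (illustrative):

    kubectl --context custom-flannel-269490 get clusterrolebinding minikube-rbac -o yaml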
	I0401 21:08:16.148769   72096 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:16.148863   72096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:08:16.150194   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:16.150504   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 21:08:16.150507   72096 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:08:16.150594   72096 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 21:08:16.150716   72096 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:08:16.150735   72096 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-269490"
	I0401 21:08:16.150760   72096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-269490"
	I0401 21:08:16.150715   72096 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-269490"
	I0401 21:08:16.150863   72096 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-269490"
	I0401 21:08:16.150890   72096 host.go:66] Checking if "custom-flannel-269490" exists ...
	I0401 21:08:16.151250   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.151283   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.151250   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.151392   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.152288   72096 out.go:177] * Verifying Kubernetes components...
	I0401 21:08:16.153941   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:08:16.167829   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0401 21:08:16.167856   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0401 21:08:16.168243   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.168391   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.168828   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.168843   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.168868   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.168884   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.169237   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.169245   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.169517   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.169824   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.169861   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.172742   72096 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-269490"
	I0401 21:08:16.172773   72096 host.go:66] Checking if "custom-flannel-269490" exists ...
	I0401 21:08:16.172999   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.173021   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.187721   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0401 21:08:16.188253   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.188750   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.188774   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.189282   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.189445   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.189724   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0401 21:08:16.190201   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.190710   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.190728   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.191093   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.191453   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:08:16.191654   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.191689   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.192999   72096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:08:16.194424   72096 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:08:16.194442   72096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 21:08:16.194461   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:08:16.197511   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.198005   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:08:16.198041   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.198238   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:08:16.198409   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:08:16.198748   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:08:16.198918   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:08:16.207703   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0401 21:08:16.208135   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.208589   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.208612   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.209006   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.209189   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.211107   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:08:16.211344   72096 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 21:08:16.211365   72096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 21:08:16.211385   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:08:16.213813   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.214123   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:08:16.214151   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.214296   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:08:16.214499   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:08:16.214910   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:08:16.215227   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:08:16.590199   72096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:08:16.590208   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 21:08:16.643763   72096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 21:08:16.713804   72096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:08:17.209943   72096 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
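The sed pipeline at 21:08:16.590208 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.39.1 and then replaces the ConfigMap; the "host record injected" line above confirms it. Whether the record landed can be checked with (illustrative):

    kubectl --context custom-flannel-269490 -n kube-system get configmap coredns -o yaml | grep -A3 hosts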
	I0401 21:08:17.210084   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.210105   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.210495   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.210517   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.210528   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.210536   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.210760   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.210776   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.211295   72096 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-269490" to be "Ready" ...
	I0401 21:08:17.251129   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.251163   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.251515   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.251537   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.251546   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.513636   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.513660   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.515627   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.515656   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.515670   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.515670   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.515679   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.515935   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.515951   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.515959   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.517748   72096 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 21:08:17.519567   72096 addons.go:514] duration metric: took 1.36897309s for enable addons: enabled=[default-storageclass storage-provisioner]
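Only the two default addons are enabled for this profile. The resulting addon state can be listed with minikube itself (a standard subcommand, shown here purely as a usage note):

    out/minikube-linux-amd64 -p custom-flannel-269490 addons list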
	I0401 21:08:17.714019   72096 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-269490" context rescaled to 1 replicas
	I0401 21:08:19.214809   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:21.214845   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:23.715394   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:25.752337   72096 node_ready.go:49] node "custom-flannel-269490" has status "Ready":"True"
	I0401 21:08:25.752361   72096 node_ready.go:38] duration metric: took 8.541004401s for node "custom-flannel-269490" to be "Ready" ...
	I0401 21:08:25.752373   72096 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:25.781711   72096 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:27.788318   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:29.789254   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:32.287111   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:34.287266   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:36.288139   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:38.788164   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:39.288278   72096 pod_ready.go:93] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.288311   72096 pod_ready.go:82] duration metric: took 13.506568961s for pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.288323   72096 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.293894   72096 pod_ready.go:93] pod "etcd-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.293914   72096 pod_ready.go:82] duration metric: took 5.583334ms for pod "etcd-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.293922   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.299231   72096 pod_ready.go:93] pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.299260   72096 pod_ready.go:82] duration metric: took 5.329864ms for pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.299273   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.303589   72096 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.303611   72096 pod_ready.go:82] duration metric: took 4.329184ms for pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.303626   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7mfxw" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.307588   72096 pod_ready.go:93] pod "kube-proxy-7mfxw" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.307608   72096 pod_ready.go:82] duration metric: took 3.974955ms for pod "kube-proxy-7mfxw" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.307619   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.686205   72096 pod_ready.go:93] pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.686262   72096 pod_ready.go:82] duration metric: took 378.634734ms for pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.686278   72096 pod_ready.go:39] duration metric: took 13.933890743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:39.686295   72096 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:08:39.686354   72096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:08:39.707371   72096 api_server.go:72] duration metric: took 23.556833358s to wait for apiserver process to appear ...
	I0401 21:08:39.707408   72096 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:08:39.707430   72096 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0401 21:08:39.712196   72096 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0401 21:08:39.713175   72096 api_server.go:141] control plane version: v1.32.2
	I0401 21:08:39.713206   72096 api_server.go:131] duration metric: took 5.790036ms to wait for apiserver health ...
	I0401 21:08:39.713216   72096 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:08:39.887726   72096 system_pods.go:59] 7 kube-system pods found
	I0401 21:08:39.887756   72096 system_pods.go:61] "coredns-668d6bf9bc-5mj4j" [36eeaf01-f8b5-4b27-a127-3e8e6fb6fe55] Running
	I0401 21:08:39.887763   72096 system_pods.go:61] "etcd-custom-flannel-269490" [13ff3a81-1ab8-47ea-9773-5d96ece48b42] Running
	I0401 21:08:39.887768   72096 system_pods.go:61] "kube-apiserver-custom-flannel-269490" [6593f2ea-974b-4d95-89ea-5231ae3f8f9a] Running
	I0401 21:08:39.887773   72096 system_pods.go:61] "kube-controller-manager-custom-flannel-269490" [badd65c7-6a1d-4ac6-8e2b-81b0523d520a] Running
	I0401 21:08:39.887777   72096 system_pods.go:61] "kube-proxy-7mfxw" [1b07ba12-0e06-432e-b1ef-6712daa0aceb] Running
	I0401 21:08:39.887786   72096 system_pods.go:61] "kube-scheduler-custom-flannel-269490" [c28fe18b-4d5e-481c-9f77-897e84bdc147] Running
	I0401 21:08:39.887791   72096 system_pods.go:61] "storage-provisioner" [23315522-a502-4852-98ec-9589e819d09c] Running
	I0401 21:08:39.887799   72096 system_pods.go:74] duration metric: took 174.575758ms to wait for pod list to return data ...
	I0401 21:08:39.887809   72096 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:08:40.086898   72096 default_sa.go:45] found service account: "default"
	I0401 21:08:40.086922   72096 default_sa.go:55] duration metric: took 199.10767ms for default service account to be created ...
	I0401 21:08:40.086932   72096 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:08:40.287384   72096 system_pods.go:86] 7 kube-system pods found
	I0401 21:08:40.287416   72096 system_pods.go:89] "coredns-668d6bf9bc-5mj4j" [36eeaf01-f8b5-4b27-a127-3e8e6fb6fe55] Running
	I0401 21:08:40.287421   72096 system_pods.go:89] "etcd-custom-flannel-269490" [13ff3a81-1ab8-47ea-9773-5d96ece48b42] Running
	I0401 21:08:40.287425   72096 system_pods.go:89] "kube-apiserver-custom-flannel-269490" [6593f2ea-974b-4d95-89ea-5231ae3f8f9a] Running
	I0401 21:08:40.287429   72096 system_pods.go:89] "kube-controller-manager-custom-flannel-269490" [badd65c7-6a1d-4ac6-8e2b-81b0523d520a] Running
	I0401 21:08:40.287432   72096 system_pods.go:89] "kube-proxy-7mfxw" [1b07ba12-0e06-432e-b1ef-6712daa0aceb] Running
	I0401 21:08:40.287435   72096 system_pods.go:89] "kube-scheduler-custom-flannel-269490" [c28fe18b-4d5e-481c-9f77-897e84bdc147] Running
	I0401 21:08:40.287438   72096 system_pods.go:89] "storage-provisioner" [23315522-a502-4852-98ec-9589e819d09c] Running
	I0401 21:08:40.287443   72096 system_pods.go:126] duration metric: took 200.50653ms to wait for k8s-apps to be running ...
	I0401 21:08:40.287450   72096 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:08:40.287503   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:08:40.303609   72096 system_svc.go:56] duration metric: took 16.150777ms WaitForService to wait for kubelet
	I0401 21:08:40.303639   72096 kubeadm.go:582] duration metric: took 24.153106492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:08:40.303665   72096 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:08:40.486884   72096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:08:40.486919   72096 node_conditions.go:123] node cpu capacity is 2
	I0401 21:08:40.486933   72096 node_conditions.go:105] duration metric: took 183.261884ms to run NodePressure ...
	I0401 21:08:40.486946   72096 start.go:241] waiting for startup goroutines ...
	I0401 21:08:40.486955   72096 start.go:246] waiting for cluster config update ...
	I0401 21:08:40.486969   72096 start.go:255] writing updated cluster config ...
	I0401 21:08:40.487283   72096 ssh_runner.go:195] Run: rm -f paused
	I0401 21:08:40.546242   72096 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:08:40.548286   72096 out.go:177] * Done! kubectl is now configured to use "custom-flannel-269490" cluster and "default" namespace by default
	I0401 21:08:44.694071   61496 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 21:08:44.694235   61496 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 21:08:44.695734   61496 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 21:08:44.695829   61496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:44.695942   61496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:44.696082   61496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:44.696333   61496 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:08:44.696433   61496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:08:44.698422   61496 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:44.698535   61496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:44.698622   61496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:44.698707   61496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 21:08:44.698782   61496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 21:08:44.698848   61496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 21:08:44.698894   61496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 21:08:44.698952   61496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 21:08:44.699004   61496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 21:08:44.699067   61496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 21:08:44.699131   61496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 21:08:44.699164   61496 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 21:08:44.699213   61496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:44.699257   61496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:44.699302   61496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:44.699360   61496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:44.699410   61496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:44.699518   61496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:44.699595   61496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:44.699630   61496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:44.699705   61496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:44.701085   61496 out.go:235]   - Booting up control plane ...
	I0401 21:08:44.701182   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:44.701269   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:44.701370   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:44.701492   61496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:44.701663   61496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 21:08:44.701710   61496 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 21:08:44.701768   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.701969   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702033   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702244   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702341   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702570   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702639   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702818   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702922   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.703238   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.703248   61496 kubeadm.go:310] 
	I0401 21:08:44.703300   61496 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 21:08:44.703339   61496 kubeadm.go:310] 		timed out waiting for the condition
	I0401 21:08:44.703347   61496 kubeadm.go:310] 
	I0401 21:08:44.703393   61496 kubeadm.go:310] 	This error is likely caused by:
	I0401 21:08:44.703424   61496 kubeadm.go:310] 		- The kubelet is not running
	I0401 21:08:44.703575   61496 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 21:08:44.703594   61496 kubeadm.go:310] 
	I0401 21:08:44.703747   61496 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 21:08:44.703797   61496 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 21:08:44.703843   61496 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 21:08:44.703851   61496 kubeadm.go:310] 
	I0401 21:08:44.703979   61496 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 21:08:44.704106   61496 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 21:08:44.704117   61496 kubeadm.go:310] 
	I0401 21:08:44.704223   61496 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 21:08:44.704338   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 21:08:44.704400   61496 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 21:08:44.704458   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
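The failed v1.20.0 start (pid 61496) ends with kubeadm's standard troubleshooting advice. Run on that node, e.g. via minikube ssh -p <profile> (the profile name for pid 61496 is not shown in this excerpt), the suggestions amount to:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause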
	I0401 21:08:44.704515   61496 kubeadm.go:394] duration metric: took 8m1.369559682s to StartCluster
	I0401 21:08:44.704550   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:08:44.704601   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:08:44.704607   61496 kubeadm.go:310] 
	I0401 21:08:44.776607   61496 cri.go:89] found id: ""
	I0401 21:08:44.776631   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.776638   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:08:44.776643   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:08:44.776688   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:08:44.822697   61496 cri.go:89] found id: ""
	I0401 21:08:44.822724   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.822732   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:08:44.822737   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:08:44.822789   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:08:44.870855   61496 cri.go:89] found id: ""
	I0401 21:08:44.870884   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.870895   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:08:44.870903   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:08:44.870963   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:08:44.909983   61496 cri.go:89] found id: ""
	I0401 21:08:44.910010   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.910019   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:08:44.910025   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:08:44.910205   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:08:44.947636   61496 cri.go:89] found id: ""
	I0401 21:08:44.947667   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.947677   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:08:44.947684   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:08:44.947742   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:08:44.987225   61496 cri.go:89] found id: ""
	I0401 21:08:44.987254   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.987265   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:08:44.987273   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:08:44.987328   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:08:45.031455   61496 cri.go:89] found id: ""
	I0401 21:08:45.031483   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.031493   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:08:45.031498   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:08:45.031556   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:08:45.073545   61496 cri.go:89] found id: ""
	I0401 21:08:45.073572   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.073582   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:08:45.073593   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:08:45.073604   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:08:45.139059   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:08:45.139110   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:08:45.156271   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:08:45.156309   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:08:45.239654   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:08:45.239682   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:08:45.239697   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:08:45.355473   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:08:45.355501   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0401 21:08:45.401208   61496 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 21:08:45.401255   61496 out.go:270] * 
	W0401 21:08:45.401306   61496 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.401323   61496 out.go:270] * 
	W0401 21:08:45.402124   61496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 21:08:45.405265   61496 out.go:201] 
	W0401 21:08:45.406413   61496 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.406448   61496 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 21:08:45.406470   61496 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 21:08:45.407866   61496 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.413801668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743541726413780877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f4d6976-94da-495f-bc12-26ec1ad3ffcd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.414444656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a5dc422-1a6c-4f6d-88c3-28675246573f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.414498272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a5dc422-1a6c-4f6d-88c3-28675246573f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.414529335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4a5dc422-1a6c-4f6d-88c3-28675246573f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.447363732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=355faa58-3a7d-453b-945e-8781ca012a6a name=/runtime.v1.RuntimeService/Version
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.447451390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=355faa58-3a7d-453b-945e-8781ca012a6a name=/runtime.v1.RuntimeService/Version
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.448626728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7f15d3d-f363-4d8f-aee9-b7b23d45ef75 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.449057281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743541726449034300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7f15d3d-f363-4d8f-aee9-b7b23d45ef75 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.449856562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7301462d-324a-4fa1-b757-434511fc7bd5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.449920691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7301462d-324a-4fa1-b757-434511fc7bd5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.449994669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7301462d-324a-4fa1-b757-434511fc7bd5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.481178437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90e4cf7c-8b78-4822-be40-c6c744439fea name=/runtime.v1.RuntimeService/Version
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.481245428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90e4cf7c-8b78-4822-be40-c6c744439fea name=/runtime.v1.RuntimeService/Version
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.482771768Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de6ef58c-8228-41ae-b93f-0a2edba5a46f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.483212627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743541726483179545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de6ef58c-8228-41ae-b93f-0a2edba5a46f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.483707531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f85a584-0018-48c8-96db-5269bbe9903c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.483776019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f85a584-0018-48c8-96db-5269bbe9903c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.483808803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6f85a584-0018-48c8-96db-5269bbe9903c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.522641152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1d4c17b-46f8-4e33-acb9-a06c9fac6bab name=/runtime.v1.RuntimeService/Version
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.522751601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1d4c17b-46f8-4e33-acb9-a06c9fac6bab name=/runtime.v1.RuntimeService/Version
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.524522251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=205392e4-30b1-46bd-a562-26a3f867d0b0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.524878297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743541726524857678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=205392e4-30b1-46bd-a562-26a3f867d0b0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.525592729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15d5040b-95f9-40c6-8c5f-f8c04eca9c0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.525641403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15d5040b-95f9-40c6-8c5f-f8c04eca9c0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:08:46 old-k8s-version-582207 crio[644]: time="2025-04-01 21:08:46.525678006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=15d5040b-95f9-40c6-8c5f-f8c04eca9c0c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 21:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054135] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.204738] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.959861] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.661664] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.677904] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068300] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079515] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.190777] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.171995] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.258506] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.231600] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.068848] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.731705] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[ +11.880365] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 21:04] systemd-fstab-generator[5005]: Ignoring "noauto" option for root device
	[Apr 1 21:06] systemd-fstab-generator[5279]: Ignoring "noauto" option for root device
	[  +0.075307] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:08:46 up 8 min,  0 users,  load average: 0.00, 0.09, 0.06
	Linux old-k8s-version-582207 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000bdd950)
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: goroutine 148 [select]:
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a77ef0, 0x4f0ac20, 0xc000d00410, 0x1, 0xc0001020c0)
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00025d260, 0xc0001020c0)
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bc3e40, 0xc000bf4f60)
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5445]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 01 21:08:41 old-k8s-version-582207 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 01 21:08:41 old-k8s-version-582207 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 01 21:08:41 old-k8s-version-582207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19.
	Apr 01 21:08:41 old-k8s-version-582207 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 01 21:08:41 old-k8s-version-582207 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5455]: I0401 21:08:41.876734    5455 server.go:416] Version: v1.20.0
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5455]: I0401 21:08:41.877029    5455 server.go:837] Client rotation is on, will bootstrap in background
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5455]: I0401 21:08:41.879238    5455 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5455]: W0401 21:08:41.880557    5455 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 01 21:08:41 old-k8s-version-582207 kubelet[5455]: I0401 21:08:41.880751    5455 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
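The kubeadm output captured above already names the next troubleshooting steps. As a hedged reproduction aid (not part of the recorded run), those same checks can be executed inside the guest, e.g. via "out/minikube-linux-amd64 -p old-k8s-version-582207 ssh" with the profile name taken from the logs:

	# Probe the kubelet health endpoint that the [kubelet-check] loop polls
	curl -sSL http://localhost:10248/healthz

	# Inspect kubelet state and its recent journal on the systemd-based guest
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List Kubernetes containers known to CRI-O, then inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID

All of these commands are the ones suggested verbatim in the kubeadm/minikube output above; in this run they would simply confirm that no control-plane containers were created and that the kubelet keeps restarting.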
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (229.779541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-582207" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (513.77s)
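The suggestion emitted just before the failure points at the kubelet cgroup driver. A hedged sketch of retrying the same profile with that flag is shown below; the profile name and Kubernetes version come from the log above, the remaining flags mirror this job's crio configuration and are assumptions, and whether the flag actually resolves the failure is not established by this run:

	out/minikube-linux-amd64 start -p old-k8s-version-582207 \
	  --kubernetes-version=v1.20.0 \
	  --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd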

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:08:50.869544   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:09:06.657633   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:09:23.389259   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
	[previous warning repeated on every subsequent poll while 192.168.50.128:8443 refused connections]
E0401 21:10:24.181416   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.187810   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.199191   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.220620   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.262144   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.343679   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.505292   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:24.827262   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:25.469462   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:26.751465   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:29.312884   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:34.434447   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:44.676712   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:54.980948   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:54.987388   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:54.998782   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:55.020282   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:55.061775   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:55.143080   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:55.304641   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:55.626569   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:55.863217   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:56.267925   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:10:57.549503   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:00.111445   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:05.158083   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:05.233626   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:15.475688   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:23.566520   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.669214   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.675583   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.686912   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.708294   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.749716   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.831223   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:34.993083   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:35.314698   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:35.956381   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:35.957570   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:37.238161   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:39.529786   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:39.800465   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:44.922343   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:46.120208   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:11:53.407612   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:53.414079   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:53.425629   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:53.447125   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:53.488612   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:53.570109   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:53.731741   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:54.053480   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:11:54.694847   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:11:55.163928   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:11:55.976554   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:11:58.538575   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:03.660833   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:07.231434   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:13.902252   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:15.646080   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:16.918977   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:27.799509   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:34.384606   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:55.437091   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:55.443487   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:55.454910   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:55.476366   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:55.517798   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:55.599323   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:55.760954   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:56.082754   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:56.607640   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:12:56.724380   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:12:58.006191   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:00.567643   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:05.689472   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:08.042371   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:11.844042   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:11.850486   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:11.861873   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:11.883370   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:11.924876   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:12.006324   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:12.167677   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:12.489432   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:13.131267   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:14.413416   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:15.346408   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:15.931052   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:16.974772   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:22.096089   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:32.338361   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:36.412417   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:38.840569   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:41.070081   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:41.077273   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:41.088664   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:41.110087   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:41.151612   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:41.233098   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:41.394720   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:13:41.716477   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:42.358450   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:43.639799   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:46.201362   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:51.323247   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:13:52.820040   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:01.564601   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:06.657481   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:17.374559   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:18.529231   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:22.046096   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:33.781382   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:14:37.267888   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:15:03.008037   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:15:24.181147   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:15:39.296722   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:15:51.884347   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:15:54.980753   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:15:55.702737   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:15:55.863276   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:16:22.682096   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:16:24.930333   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:16:34.668548   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:16:39.530850   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:16:53.407104   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:17:02.371110   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:17:09.731958   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:17:21.109959   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:17:27.798830   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (231.157372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-582207" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
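The repeated "pod list ... connection refused" warnings above come from the harness polling the API server for pods matching the k8s-app=kubernetes-dashboard label until one reports ready, and the final "context deadline exceeded" is the 9m0s budget running out. The Go sketch below illustrates that style of label-selector poll with client-go; the kubeconfig loading, 5-second retry interval, namespace, and selector are illustrative assumptions, not the exact code in helpers_test.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the API server for pods matching selector in ns until one
// reports phase Running or ctx expires, mirroring the retry loop whose WARNING
// lines appear in the log above.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// The harness logs a WARNING like the ones above and keeps retrying.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded" once the budget is spent
		case <-time.After(5 * time.Second): // assumed retry interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute) // the 9m0s budget from the test
	defer cancel()
	if err := waitForPods(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("failed waiting for dashboard pod:", err)
	}
}
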
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (233.439288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-582207 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo cat                    | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo cat                    | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo cat                    | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 21:07:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 21:07:20.892475   72096 out.go:345] Setting OutFile to fd 1 ...
	I0401 21:07:20.892577   72096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:07:20.892588   72096 out.go:358] Setting ErrFile to fd 2...
	I0401 21:07:20.892592   72096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:07:20.892779   72096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 21:07:20.893387   72096 out.go:352] Setting JSON to false
	I0401 21:07:20.894914   72096 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6585,"bootTime":1743535056,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 21:07:20.895074   72096 start.go:139] virtualization: kvm guest
	I0401 21:07:20.896928   72096 out.go:177] * [custom-flannel-269490] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 21:07:20.898151   72096 notify.go:220] Checking for updates...
	I0401 21:07:20.898184   72096 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 21:07:20.899289   72096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 21:07:20.900337   72096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:07:20.901554   72096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:20.902784   72096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 21:07:20.903866   72096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 21:07:20.905447   72096 config.go:182] Loaded profile config "calico-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:20.905560   72096 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:20.905643   72096 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 21:07:20.905706   72096 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 21:07:20.945212   72096 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 21:07:20.946413   72096 start.go:297] selected driver: kvm2
	I0401 21:07:20.946434   72096 start.go:901] validating driver "kvm2" against <nil>
	I0401 21:07:20.946446   72096 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 21:07:20.947178   72096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:07:20.947262   72096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 21:07:20.963919   72096 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 21:07:20.963985   72096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 21:07:20.964232   72096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:07:20.964268   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:07:20.964285   72096 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0401 21:07:20.964365   72096 start.go:340] cluster config:
	{Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:07:20.964523   72096 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:07:20.966047   72096 out.go:177] * Starting "custom-flannel-269490" primary control-plane node in "custom-flannel-269490" cluster
	I0401 21:07:18.476294   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:18.476788   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find current IP address of domain kindnet-269490 in network mk-kindnet-269490
	I0401 21:07:18.476808   70627 main.go:141] libmachine: (kindnet-269490) DBG | I0401 21:07:18.476765   70649 retry.go:31] will retry after 3.122657647s: waiting for domain to come up
	I0401 21:07:21.603058   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:21.603568   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find current IP address of domain kindnet-269490 in network mk-kindnet-269490
	I0401 21:07:21.603587   70627 main.go:141] libmachine: (kindnet-269490) DBG | I0401 21:07:21.603538   70649 retry.go:31] will retry after 5.429623003s: waiting for domain to come up
	I0401 21:07:19.747355   68904 addons.go:514] duration metric: took 1.377747901s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:07:19.754995   68904 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-269490" context rescaled to 1 replicas
	I0401 21:07:21.254062   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:23.254170   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:20.967052   72096 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:20.967100   72096 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 21:07:20.967109   72096 cache.go:56] Caching tarball of preloaded images
	I0401 21:07:20.967208   72096 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 21:07:20.967221   72096 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 21:07:20.967324   72096 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json ...
	I0401 21:07:20.967350   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json: {Name:mkabbd5fa26c3d0a0e3ad8206cce24911ddf4ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:20.967473   72096 start.go:360] acquireMachinesLock for custom-flannel-269490: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 21:07:27.036122   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.036704   70627 main.go:141] libmachine: (kindnet-269490) found domain IP: 192.168.72.200
	I0401 21:07:27.036728   70627 main.go:141] libmachine: (kindnet-269490) reserving static IP address...
	I0401 21:07:27.036741   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has current primary IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.037062   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find host DHCP lease matching {name: "kindnet-269490", mac: "52:54:00:a7:37:c0", ip: "192.168.72.200"} in network mk-kindnet-269490
	I0401 21:07:27.112813   70627 main.go:141] libmachine: (kindnet-269490) DBG | Getting to WaitForSSH function...
	I0401 21:07:27.112843   70627 main.go:141] libmachine: (kindnet-269490) reserved static IP address 192.168.72.200 for domain kindnet-269490
	I0401 21:07:27.112872   70627 main.go:141] libmachine: (kindnet-269490) waiting for SSH...
	I0401 21:07:27.115323   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.115796   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.115923   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.115950   70627 main.go:141] libmachine: (kindnet-269490) DBG | Using SSH client type: external
	I0401 21:07:27.115972   70627 main.go:141] libmachine: (kindnet-269490) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa (-rw-------)
	I0401 21:07:27.115994   70627 main.go:141] libmachine: (kindnet-269490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:07:27.116012   70627 main.go:141] libmachine: (kindnet-269490) DBG | About to run SSH command:
	I0401 21:07:27.116025   70627 main.go:141] libmachine: (kindnet-269490) DBG | exit 0
	I0401 21:07:28.675412   72096 start.go:364] duration metric: took 7.707851568s to acquireMachinesLock for "custom-flannel-269490"
	I0401 21:07:28.675471   72096 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 C
lusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:07:28.675590   72096 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 21:07:25.472985   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:27.253847   68904 node_ready.go:49] node "calico-269490" has status "Ready":"True"
	I0401 21:07:27.253864   68904 node_ready.go:38] duration metric: took 8.003199629s for node "calico-269490" to be "Ready" ...
	I0401 21:07:27.253872   68904 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:07:27.257050   68904 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:27.242376   70627 main.go:141] libmachine: (kindnet-269490) DBG | SSH cmd err, output: <nil>: 
	I0401 21:07:27.242647   70627 main.go:141] libmachine: (kindnet-269490) KVM machine creation complete
	I0401 21:07:27.242954   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetConfigRaw
	I0401 21:07:27.243418   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:27.243604   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:27.243762   70627 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 21:07:27.243775   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:27.245022   70627 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 21:07:27.245035   70627 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 21:07:27.245039   70627 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 21:07:27.245044   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.247141   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.247552   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.247576   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.247767   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.247943   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.248079   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.248204   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.248336   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.248568   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.248579   70627 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 21:07:27.345624   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:27.345651   70627 main.go:141] libmachine: Detecting the provisioner...
	I0401 21:07:27.345668   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.348762   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.349156   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.349177   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.349442   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.349668   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.349845   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.349977   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.350143   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.350384   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.350397   70627 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 21:07:27.455197   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 21:07:27.455275   70627 main.go:141] libmachine: found compatible host: buildroot
	I0401 21:07:27.455286   70627 main.go:141] libmachine: Provisioning with buildroot...
	I0401 21:07:27.455296   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.455573   70627 buildroot.go:166] provisioning hostname "kindnet-269490"
	I0401 21:07:27.455600   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.455807   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.458178   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.458482   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.458501   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.458727   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.458935   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.459090   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.459252   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.459383   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.459600   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.459612   70627 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-269490 && echo "kindnet-269490" | sudo tee /etc/hostname
	I0401 21:07:27.580784   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-269490
	
	I0401 21:07:27.580810   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.583963   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.584471   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.584501   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.584766   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.584991   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.585193   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.585384   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.585564   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.585756   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.585773   70627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-269490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-269490/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-269490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:07:27.700731   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:27.700756   70627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:07:27.700776   70627 buildroot.go:174] setting up certificates
	I0401 21:07:27.700789   70627 provision.go:84] configureAuth start
	I0401 21:07:27.700807   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.701088   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:27.703973   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.704286   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.704299   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.704491   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.706703   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.707051   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.707076   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.707203   70627 provision.go:143] copyHostCerts
	I0401 21:07:27.707255   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:07:27.707265   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:07:27.707328   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:07:27.707422   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:07:27.707429   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:07:27.707453   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:07:27.707515   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:07:27.707522   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:07:27.707542   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:07:27.707603   70627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.kindnet-269490 san=[127.0.0.1 192.168.72.200 kindnet-269490 localhost minikube]
	I0401 21:07:28.041214   70627 provision.go:177] copyRemoteCerts
	I0401 21:07:28.041272   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:07:28.041293   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.044440   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.044786   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.044818   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.044953   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.045179   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.045341   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.045494   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.125273   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:07:28.152183   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0401 21:07:28.177383   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 21:07:28.201496   70627 provision.go:87] duration metric: took 500.692247ms to configureAuth
	I0401 21:07:28.201523   70627 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:07:28.201720   70627 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:28.201828   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.204278   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.204623   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.204647   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.204776   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.204980   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.205160   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.205299   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.205448   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:28.205669   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:28.205689   70627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:07:28.439140   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:07:28.439165   70627 main.go:141] libmachine: Checking connection to Docker...
	I0401 21:07:28.439173   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetURL
	I0401 21:07:28.440485   70627 main.go:141] libmachine: (kindnet-269490) DBG | using libvirt version 6000000
	I0401 21:07:28.442490   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.442845   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.442873   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.443006   70627 main.go:141] libmachine: Docker is up and running!
	I0401 21:07:28.443020   70627 main.go:141] libmachine: Reticulating splines...
	I0401 21:07:28.443027   70627 client.go:171] duration metric: took 26.224912939s to LocalClient.Create
	I0401 21:07:28.443053   70627 start.go:167] duration metric: took 26.224971636s to libmachine.API.Create "kindnet-269490"
	I0401 21:07:28.443076   70627 start.go:293] postStartSetup for "kindnet-269490" (driver="kvm2")
	I0401 21:07:28.443090   70627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:07:28.443111   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.443340   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:07:28.443361   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.445496   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.445781   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.445819   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.445938   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.446110   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.446250   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.446380   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.527257   70627 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:07:28.531876   70627 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:07:28.531913   70627 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:07:28.531976   70627 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:07:28.532079   70627 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:07:28.532200   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:07:28.542758   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:28.567116   70627 start.go:296] duration metric: took 124.023387ms for postStartSetup
	I0401 21:07:28.567157   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetConfigRaw
	I0401 21:07:28.567744   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:28.570513   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.570890   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.570925   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.571188   70627 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/config.json ...
	I0401 21:07:28.571352   70627 start.go:128] duration metric: took 26.372666304s to createHost
	I0401 21:07:28.571372   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.573625   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.573965   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.573996   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.574106   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.574359   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.574499   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.574645   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.574805   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:28.574999   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:28.575009   70627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:07:28.675218   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541648.630648618
	
	I0401 21:07:28.675244   70627 fix.go:216] guest clock: 1743541648.630648618
	I0401 21:07:28.675251   70627 fix.go:229] Guest: 2025-04-01 21:07:28.630648618 +0000 UTC Remote: 2025-04-01 21:07:28.571362914 +0000 UTC m=+26.497421115 (delta=59.285704ms)
	I0401 21:07:28.675268   70627 fix.go:200] guest clock delta is within tolerance: 59.285704ms
	I0401 21:07:28.675273   70627 start.go:83] releasing machines lock for "kindnet-269490", held for 26.476652376s
	I0401 21:07:28.675294   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.675584   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:28.678529   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.678972   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.679003   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.679129   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679598   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679812   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679913   70627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:07:28.679970   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.680010   70627 ssh_runner.go:195] Run: cat /version.json
	I0401 21:07:28.680030   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.682720   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683101   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.683138   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683163   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683249   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.683417   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.683501   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.683531   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683603   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.683739   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.683788   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.683896   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.684046   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.684172   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.769030   70627 ssh_runner.go:195] Run: systemctl --version
	I0401 21:07:28.791882   70627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:07:28.961201   70627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:07:28.969654   70627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:07:28.969728   70627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:07:28.986375   70627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:07:28.986411   70627 start.go:495] detecting cgroup driver to use...
	I0401 21:07:28.986468   70627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:07:29.003118   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:07:29.017954   70627 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:07:29.018024   70627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:07:29.039725   70627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:07:29.056555   70627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:07:29.182669   70627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:07:29.336854   70627 docker.go:233] disabling docker service ...
	I0401 21:07:29.336911   70627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:07:29.354124   70627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:07:29.368340   70627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:07:29.535858   70627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:07:29.694425   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:07:29.713503   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:07:29.735749   70627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 21:07:29.735818   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.747810   70627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:07:29.747881   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.759913   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.777285   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.793765   70627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:07:29.806511   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.821740   70627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.845322   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.860990   70627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:07:29.874670   70627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:07:29.874736   70627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:07:29.893635   70627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:07:29.908790   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:30.038485   70627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:07:30.156804   70627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:07:30.156877   70627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:07:30.163177   70627 start.go:563] Will wait 60s for crictl version
	I0401 21:07:30.163270   70627 ssh_runner.go:195] Run: which crictl
	I0401 21:07:30.167977   70627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:07:30.229882   70627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:07:30.229963   70627 ssh_runner.go:195] Run: crio --version
	I0401 21:07:30.269347   70627 ssh_runner.go:195] Run: crio --version
	I0401 21:07:30.302624   70627 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 21:07:28.677559   72096 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0401 21:07:28.677751   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:28.677822   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:28.694049   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0401 21:07:28.694546   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:28.695167   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:07:28.695195   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:28.695565   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:28.695779   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:28.695920   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:28.696100   72096 start.go:159] libmachine.API.Create for "custom-flannel-269490" (driver="kvm2")
	I0401 21:07:28.696127   72096 client.go:168] LocalClient.Create starting
	I0401 21:07:28.696164   72096 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 21:07:28.696199   72096 main.go:141] libmachine: Decoding PEM data...
	I0401 21:07:28.696217   72096 main.go:141] libmachine: Parsing certificate...
	I0401 21:07:28.696268   72096 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 21:07:28.696301   72096 main.go:141] libmachine: Decoding PEM data...
	I0401 21:07:28.696318   72096 main.go:141] libmachine: Parsing certificate...
	I0401 21:07:28.696344   72096 main.go:141] libmachine: Running pre-create checks...
	I0401 21:07:28.696357   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .PreCreateCheck
	I0401 21:07:28.696663   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:28.697088   72096 main.go:141] libmachine: Creating machine...
	I0401 21:07:28.697104   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Create
	I0401 21:07:28.697278   72096 main.go:141] libmachine: (custom-flannel-269490) creating KVM machine...
	I0401 21:07:28.697294   72096 main.go:141] libmachine: (custom-flannel-269490) creating network...
	I0401 21:07:28.698499   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found existing default KVM network
	I0401 21:07:28.699714   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:28.699559   72184 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201380}
	I0401 21:07:28.699734   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | created network xml: 
	I0401 21:07:28.699747   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | <network>
	I0401 21:07:28.699756   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <name>mk-custom-flannel-269490</name>
	I0401 21:07:28.699772   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <dns enable='no'/>
	I0401 21:07:28.699783   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   
	I0401 21:07:28.699791   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 21:07:28.699801   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |     <dhcp>
	I0401 21:07:28.699814   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 21:07:28.699824   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |     </dhcp>
	I0401 21:07:28.699834   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   </ip>
	I0401 21:07:28.699842   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   
	I0401 21:07:28.699856   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | </network>
	I0401 21:07:28.699866   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | 
	I0401 21:07:28.705387   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | trying to create private KVM network mk-custom-flannel-269490 192.168.39.0/24...
	I0401 21:07:28.781748   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | private KVM network mk-custom-flannel-269490 192.168.39.0/24 created
	I0401 21:07:28.781785   72096 main.go:141] libmachine: (custom-flannel-269490) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 ...
	I0401 21:07:28.781803   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:28.781711   72184 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:28.781825   72096 main.go:141] libmachine: (custom-flannel-269490) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 21:07:28.781872   72096 main.go:141] libmachine: (custom-flannel-269490) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 21:07:29.058600   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.058491   72184 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa...
	I0401 21:07:29.284720   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.284560   72184 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/custom-flannel-269490.rawdisk...
	I0401 21:07:29.284762   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Writing magic tar header
	I0401 21:07:29.284781   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Writing SSH key tar header
	I0401 21:07:29.284790   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.284674   72184 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 ...
	I0401 21:07:29.284799   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490
	I0401 21:07:29.284806   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 21:07:29.284819   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:29.284829   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 21:07:29.284854   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 21:07:29.284877   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 (perms=drwx------)
	I0401 21:07:29.284897   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins
	I0401 21:07:29.284911   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home
	I0401 21:07:29.284916   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | skipping /home - not owner
	I0401 21:07:29.284927   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 21:07:29.284936   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 21:07:29.284947   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 21:07:29.284953   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 21:07:29.284961   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 21:07:29.284970   72096 main.go:141] libmachine: (custom-flannel-269490) creating domain...
	I0401 21:07:29.285984   72096 main.go:141] libmachine: (custom-flannel-269490) define libvirt domain using xml: 
	I0401 21:07:29.286030   72096 main.go:141] libmachine: (custom-flannel-269490) <domain type='kvm'>
	I0401 21:07:29.286042   72096 main.go:141] libmachine: (custom-flannel-269490)   <name>custom-flannel-269490</name>
	I0401 21:07:29.286047   72096 main.go:141] libmachine: (custom-flannel-269490)   <memory unit='MiB'>3072</memory>
	I0401 21:07:29.286087   72096 main.go:141] libmachine: (custom-flannel-269490)   <vcpu>2</vcpu>
	I0401 21:07:29.286134   72096 main.go:141] libmachine: (custom-flannel-269490)   <features>
	I0401 21:07:29.286149   72096 main.go:141] libmachine: (custom-flannel-269490)     <acpi/>
	I0401 21:07:29.286155   72096 main.go:141] libmachine: (custom-flannel-269490)     <apic/>
	I0401 21:07:29.286176   72096 main.go:141] libmachine: (custom-flannel-269490)     <pae/>
	I0401 21:07:29.286193   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286204   72096 main.go:141] libmachine: (custom-flannel-269490)   </features>
	I0401 21:07:29.286232   72096 main.go:141] libmachine: (custom-flannel-269490)   <cpu mode='host-passthrough'>
	I0401 21:07:29.286253   72096 main.go:141] libmachine: (custom-flannel-269490)   
	I0401 21:07:29.286262   72096 main.go:141] libmachine: (custom-flannel-269490)   </cpu>
	I0401 21:07:29.286271   72096 main.go:141] libmachine: (custom-flannel-269490)   <os>
	I0401 21:07:29.286281   72096 main.go:141] libmachine: (custom-flannel-269490)     <type>hvm</type>
	I0401 21:07:29.286291   72096 main.go:141] libmachine: (custom-flannel-269490)     <boot dev='cdrom'/>
	I0401 21:07:29.286299   72096 main.go:141] libmachine: (custom-flannel-269490)     <boot dev='hd'/>
	I0401 21:07:29.286309   72096 main.go:141] libmachine: (custom-flannel-269490)     <bootmenu enable='no'/>
	I0401 21:07:29.286318   72096 main.go:141] libmachine: (custom-flannel-269490)   </os>
	I0401 21:07:29.286327   72096 main.go:141] libmachine: (custom-flannel-269490)   <devices>
	I0401 21:07:29.286336   72096 main.go:141] libmachine: (custom-flannel-269490)     <disk type='file' device='cdrom'>
	I0401 21:07:29.286354   72096 main.go:141] libmachine: (custom-flannel-269490)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/boot2docker.iso'/>
	I0401 21:07:29.286364   72096 main.go:141] libmachine: (custom-flannel-269490)       <target dev='hdc' bus='scsi'/>
	I0401 21:07:29.286374   72096 main.go:141] libmachine: (custom-flannel-269490)       <readonly/>
	I0401 21:07:29.286383   72096 main.go:141] libmachine: (custom-flannel-269490)     </disk>
	I0401 21:07:29.286393   72096 main.go:141] libmachine: (custom-flannel-269490)     <disk type='file' device='disk'>
	I0401 21:07:29.286403   72096 main.go:141] libmachine: (custom-flannel-269490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 21:07:29.286417   72096 main.go:141] libmachine: (custom-flannel-269490)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/custom-flannel-269490.rawdisk'/>
	I0401 21:07:29.286425   72096 main.go:141] libmachine: (custom-flannel-269490)       <target dev='hda' bus='virtio'/>
	I0401 21:07:29.286439   72096 main.go:141] libmachine: (custom-flannel-269490)     </disk>
	I0401 21:07:29.286454   72096 main.go:141] libmachine: (custom-flannel-269490)     <interface type='network'>
	I0401 21:07:29.286466   72096 main.go:141] libmachine: (custom-flannel-269490)       <source network='mk-custom-flannel-269490'/>
	I0401 21:07:29.286478   72096 main.go:141] libmachine: (custom-flannel-269490)       <model type='virtio'/>
	I0401 21:07:29.286488   72096 main.go:141] libmachine: (custom-flannel-269490)     </interface>
	I0401 21:07:29.286497   72096 main.go:141] libmachine: (custom-flannel-269490)     <interface type='network'>
	I0401 21:07:29.286504   72096 main.go:141] libmachine: (custom-flannel-269490)       <source network='default'/>
	I0401 21:07:29.286528   72096 main.go:141] libmachine: (custom-flannel-269490)       <model type='virtio'/>
	I0401 21:07:29.286549   72096 main.go:141] libmachine: (custom-flannel-269490)     </interface>
	I0401 21:07:29.286563   72096 main.go:141] libmachine: (custom-flannel-269490)     <serial type='pty'>
	I0401 21:07:29.286573   72096 main.go:141] libmachine: (custom-flannel-269490)       <target port='0'/>
	I0401 21:07:29.286581   72096 main.go:141] libmachine: (custom-flannel-269490)     </serial>
	I0401 21:07:29.286603   72096 main.go:141] libmachine: (custom-flannel-269490)     <console type='pty'>
	I0401 21:07:29.286615   72096 main.go:141] libmachine: (custom-flannel-269490)       <target type='serial' port='0'/>
	I0401 21:07:29.286628   72096 main.go:141] libmachine: (custom-flannel-269490)     </console>
	I0401 21:07:29.286640   72096 main.go:141] libmachine: (custom-flannel-269490)     <rng model='virtio'>
	I0401 21:07:29.286652   72096 main.go:141] libmachine: (custom-flannel-269490)       <backend model='random'>/dev/random</backend>
	I0401 21:07:29.286663   72096 main.go:141] libmachine: (custom-flannel-269490)     </rng>
	I0401 21:07:29.286669   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286680   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286686   72096 main.go:141] libmachine: (custom-flannel-269490)   </devices>
	I0401 21:07:29.286706   72096 main.go:141] libmachine: (custom-flannel-269490) </domain>
	I0401 21:07:29.286723   72096 main.go:141] libmachine: (custom-flannel-269490) 
	I0401 21:07:29.290865   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:8b:9d:ef in network default
	I0401 21:07:29.291399   72096 main.go:141] libmachine: (custom-flannel-269490) starting domain...
	I0401 21:07:29.291422   72096 main.go:141] libmachine: (custom-flannel-269490) ensuring networks are active...
	I0401 21:07:29.291433   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:29.291982   72096 main.go:141] libmachine: (custom-flannel-269490) Ensuring network default is active
	I0401 21:07:29.292311   72096 main.go:141] libmachine: (custom-flannel-269490) Ensuring network mk-custom-flannel-269490 is active
	I0401 21:07:29.292850   72096 main.go:141] libmachine: (custom-flannel-269490) getting domain XML...
	I0401 21:07:29.293579   72096 main.go:141] libmachine: (custom-flannel-269490) creating domain...
	I0401 21:07:30.303928   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:30.307187   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:30.307572   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:30.307599   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:30.307851   70627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 21:07:30.312717   70627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
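That one-liner is minikube's idempotent pattern for pinning a name in the guest's /etc/hosts: drop any existing line for the name, append a fresh entry, and copy the result back over /etc/hosts with sudo. The same idea with the values from this run spelled out (a sketch, not the exact code path):
	NAME=host.minikube.internal      # name being pinned, per the log line above
	IP=192.168.72.1                  # host-side address of the node's libvirt network, per the grep above
	{ grep -v $'\t'"${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts     # sudo only for the final copy, so the redirect itself needs no root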
	I0401 21:07:30.329656   70627 kubeadm.go:883] updating cluster {Name:kindnet-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:07:30.329769   70627 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:30.329840   70627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:30.373808   70627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 21:07:30.373892   70627 ssh_runner.go:195] Run: which lz4
	I0401 21:07:30.379933   70627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:07:30.385901   70627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:07:30.385939   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 21:07:32.049587   70627 crio.go:462] duration metric: took 1.669696993s to copy over tarball
	I0401 21:07:32.049659   70627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:07:29.263832   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:31.264708   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:33.769708   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:30.943467   72096 main.go:141] libmachine: (custom-flannel-269490) waiting for IP...
	I0401 21:07:30.944501   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:30.945048   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:30.945154   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:30.945061   72184 retry.go:31] will retry after 194.088864ms: waiting for domain to come up
	I0401 21:07:31.141228   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.142003   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.142032   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.141987   72184 retry.go:31] will retry after 322.526555ms: waiting for domain to come up
	I0401 21:07:31.466493   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.467103   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.467136   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.467085   72184 retry.go:31] will retry after 362.158292ms: waiting for domain to come up
	I0401 21:07:31.830645   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.831272   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.831294   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.831181   72184 retry.go:31] will retry after 507.010873ms: waiting for domain to come up
	I0401 21:07:32.340049   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:32.340688   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:32.340721   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:32.340672   72184 retry.go:31] will retry after 549.764239ms: waiting for domain to come up
	I0401 21:07:32.892498   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:32.893048   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:32.893109   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:32.893038   72184 retry.go:31] will retry after 893.566953ms: waiting for domain to come up
	I0401 21:07:33.788648   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:33.789231   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:33.789313   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:33.789217   72184 retry.go:31] will retry after 1.073160889s: waiting for domain to come up
	I0401 21:07:34.863948   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:34.864715   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:34.864744   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:34.864686   72184 retry.go:31] will retry after 1.137676024s: waiting for domain to come up
	I0401 21:07:34.855116   70627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.805424084s)
	I0401 21:07:34.855163   70627 crio.go:469] duration metric: took 2.805546758s to extract the tarball
	I0401 21:07:34.855174   70627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:07:34.908880   70627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:34.967377   70627 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 21:07:34.967406   70627 cache_images.go:84] Images are preloaded, skipping loading
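The preload dance above reduces to: ask the CRI runtime what it already has, and if the expected control-plane images are missing, ship the cached lz4 tarball into the guest and unpack it over /var so cri-o's image store is populated before kubeadm pulls anything. A minimal in-guest sketch, assuming the tarball has already been copied to /preloaded.tar.lz4 as in this run:
	# check whether the runtime already has the images kubeadm needs
	if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver'; then
	    # unpack the preloaded image store over /var, preserving security xattrs, then clean up
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4
	fi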
	I0401 21:07:34.967416   70627 kubeadm.go:934] updating node { 192.168.72.200 8443 v1.32.2 crio true true} ...
	I0401 21:07:34.967548   70627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-269490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0401 21:07:34.967631   70627 ssh_runner.go:195] Run: crio config
	I0401 21:07:35.020670   70627 cni.go:84] Creating CNI manager for "kindnet"
	I0401 21:07:35.020696   70627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:07:35.020718   70627 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-269490 NodeName:kindnet-269490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 21:07:35.020839   70627 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-269490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 21:07:35.020907   70627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 21:07:35.030866   70627 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:07:35.030991   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:07:35.040113   70627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0401 21:07:35.058011   70627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:07:35.078574   70627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
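At this point the rendered kubeadm config has landed on the guest as /var/tmp/minikube/kubeadm.yaml.new (it is copied to kubeadm.yaml just before init further down). It can be parsed and previewed with a dry run before the real init; a sketch using the same pinned binary as the log:
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run    # parses the config and prints what init would do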
	I0401 21:07:35.098427   70627 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0401 21:07:35.103690   70627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:35.120443   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:35.277665   70627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:07:35.301275   70627 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490 for IP: 192.168.72.200
	I0401 21:07:35.301301   70627 certs.go:194] generating shared ca certs ...
	I0401 21:07:35.301323   70627 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:35.301486   70627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:07:35.301544   70627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:07:35.301556   70627 certs.go:256] generating profile certs ...
	I0401 21:07:35.301622   70627 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key
	I0401 21:07:35.301645   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt with IP's: []
	I0401 21:07:36.000768   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt ...
	I0401 21:07:36.000802   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: {Name:mk04a99f27c2f056a29fa36354c47c3222966cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.001003   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key ...
	I0401 21:07:36.001020   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key: {Name:mk5444fb90b1ff0a0c80a111598fb1ccc67e25fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.001135   70627 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5
	I0401 21:07:36.001155   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0401 21:07:36.090552   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 ...
	I0401 21:07:36.090588   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5: {Name:mk69f7dd622b7c419828c04f6ea380483c101940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.090767   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5 ...
	I0401 21:07:36.090785   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5: {Name:mkeaf32ff9453aef850a761332e7f9bb6dfc5cad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.090885   70627 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt
	I0401 21:07:36.090977   70627 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key
	I0401 21:07:36.091055   70627 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key
	I0401 21:07:36.091075   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt with IP's: []
	I0401 21:07:36.356603   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt ...
	I0401 21:07:36.356633   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt: {Name:mk053c71ff066a03a7f917f8347cef707651c156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.356813   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key ...
	I0401 21:07:36.356831   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key: {Name:mk7c401e3c137a1d374bd407e8454dc99cff1e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.357017   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:07:36.357068   70627 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:07:36.357083   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:07:36.357115   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:07:36.357170   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:07:36.357210   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:07:36.357269   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:36.357829   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:07:36.391336   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:07:36.425083   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:07:36.457892   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:07:36.492019   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 21:07:36.522365   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 21:07:36.547296   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:07:36.572536   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:07:36.598460   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:07:36.628401   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:07:36.658521   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:07:36.689061   70627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
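With the CA, apiserver, and proxy-client key pairs now copied under /var/lib/minikube/certs, a quick in-guest consistency check is to compare each certificate's public key against the one derived from its private key; a sketch:
	for name in ca apiserver proxy-client; do
	    crt=/var/lib/minikube/certs/${name}.crt
	    key=/var/lib/minikube/certs/${name}.key
	    # both commands emit the public key in PEM form, so matching hashes mean the pair belongs together
	    [ "$(sudo openssl x509 -pubkey -noout -in "$crt" | sha256sum)" = \
	      "$(sudo openssl pkey -pubout -in "$key" | sha256sum)" ] && echo "$name: key matches cert"
	done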
	I0401 21:07:36.714997   70627 ssh_runner.go:195] Run: openssl version
	I0401 21:07:36.723421   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:07:36.739419   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.745825   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.745888   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.754721   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:07:36.771512   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:07:36.789799   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.796727   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.796800   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.810295   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:07:36.824556   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:07:36.839972   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.847132   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.847202   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.854129   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
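The three test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: every CA dropped under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (the 51391683, 3ec20f2e and b5213941 values come from the openssl x509 -hash calls in between). For a single certificate that is roughly:
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints b5213941 for this CA in this run
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # ".0" = first certificate with this subject hash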
	I0401 21:07:36.868264   70627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:07:36.873005   70627 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 21:07:36.873058   70627 kubeadm.go:392] StartCluster: {Name:kindnet-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:07:36.873147   70627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:07:36.873204   70627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:07:36.917357   70627 cri.go:89] found id: ""
	I0401 21:07:36.917434   70627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:07:36.928432   70627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:07:36.939322   70627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:07:36.949948   70627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:07:36.949975   70627 kubeadm.go:157] found existing configuration files:
	
	I0401 21:07:36.950027   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:07:36.959903   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:07:36.959979   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:07:36.970704   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:07:36.980434   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:07:36.980531   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:07:36.994176   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:07:37.007180   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:07:37.007238   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:07:37.017875   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:07:37.028242   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:07:37.028303   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:07:37.038869   70627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:07:37.095127   70627 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 21:07:37.095194   70627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:07:37.220077   70627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:07:37.220198   70627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:07:37.220346   70627 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:07:37.232593   70627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:07:38.460012   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:37.363938   70627 out.go:235]   - Generating certificates and keys ...
	I0401 21:07:37.364091   70627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:07:37.364186   70627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:07:37.410466   70627 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 21:07:37.746651   70627 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 21:07:38.065662   70627 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 21:07:38.284383   70627 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 21:07:38.672088   70627 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 21:07:38.672441   70627 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-269490 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0401 21:07:39.029897   70627 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 21:07:39.030235   70627 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-269490 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0401 21:07:39.197549   70627 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 21:07:39.291766   70627 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 21:07:39.461667   70627 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 21:07:39.461915   70627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:07:39.598656   70627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:07:39.836507   70627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 21:07:40.087046   70627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:07:40.167057   70627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:07:40.493658   70627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:07:40.494572   70627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:07:40.497003   70627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:07:36.004129   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:36.004736   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:36.004770   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:36.004691   72184 retry.go:31] will retry after 1.398747795s: waiting for domain to come up
	I0401 21:07:37.404982   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:37.405521   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:37.405562   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:37.405494   72184 retry.go:31] will retry after 1.806073182s: waiting for domain to come up
	I0401 21:07:39.213342   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:39.213908   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:39.213933   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:39.213880   72184 retry.go:31] will retry after 2.187010311s: waiting for domain to come up
	I0401 21:07:40.498949   70627 out.go:235]   - Booting up control plane ...
	I0401 21:07:40.499089   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:07:40.500823   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:07:40.502736   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:07:40.520810   70627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:07:40.529515   70627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:07:40.529647   70627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:07:40.738046   70627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 21:07:40.738253   70627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 21:07:41.738936   70627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001264949s
	I0401 21:07:41.739064   70627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
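The two waits above poll well-known health endpoints, and both can be probed by hand from inside the node while kubeadm is running; a sketch using the addresses from this run:
	curl -s  http://127.0.0.1:10248/healthz;  echo      # kubelet healthz, plain HTTP on localhost (per the log line above)
	curl -sk https://192.168.72.200:8443/healthz; echo  # API server healthz; -k because curl does not trust the cluster CA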
	I0401 21:07:40.766840   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:42.802475   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:41.402690   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:41.403302   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:41.403328   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:41.403246   72184 retry.go:31] will retry after 2.956512585s: waiting for domain to come up
	I0401 21:07:44.361436   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:44.362043   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:44.362067   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:44.362014   72184 retry.go:31] will retry after 3.563399146s: waiting for domain to come up
	I0401 21:07:47.241056   70627 kubeadm.go:310] [api-check] The API server is healthy after 5.503493954s
	I0401 21:07:47.253704   70627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 21:07:47.270641   70627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 21:07:47.300541   70627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 21:07:47.300816   70627 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-269490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 21:07:47.320561   70627 kubeadm.go:310] [bootstrap-token] Using token: xu4lw3.orewvhbjfn5oas79
	I0401 21:07:47.322197   70627 out.go:235]   - Configuring RBAC rules ...
	I0401 21:07:47.322340   70627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 21:07:47.327157   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 21:07:47.334751   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 21:07:47.338556   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 21:07:47.342546   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 21:07:47.349586   70627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 21:07:47.650929   70627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 21:07:48.074376   70627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 21:07:48.652551   70627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 21:07:48.652572   70627 kubeadm.go:310] 
	I0401 21:07:48.652631   70627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 21:07:48.652637   70627 kubeadm.go:310] 
	I0401 21:07:48.652746   70627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 21:07:48.652757   70627 kubeadm.go:310] 
	I0401 21:07:48.652792   70627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 21:07:48.652887   70627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 21:07:48.652979   70627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 21:07:48.652989   70627 kubeadm.go:310] 
	I0401 21:07:48.653048   70627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 21:07:48.653063   70627 kubeadm.go:310] 
	I0401 21:07:48.653137   70627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 21:07:48.653146   70627 kubeadm.go:310] 
	I0401 21:07:48.653225   70627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 21:07:48.653312   70627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 21:07:48.653407   70627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 21:07:48.653421   70627 kubeadm.go:310] 
	I0401 21:07:48.653547   70627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 21:07:48.653624   70627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 21:07:48.653630   70627 kubeadm.go:310] 
	I0401 21:07:48.653714   70627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xu4lw3.orewvhbjfn5oas79 \
	I0401 21:07:48.653861   70627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 21:07:48.653901   70627 kubeadm.go:310] 	--control-plane 
	I0401 21:07:48.653911   70627 kubeadm.go:310] 
	I0401 21:07:48.653996   70627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 21:07:48.654008   70627 kubeadm.go:310] 
	I0401 21:07:48.654074   70627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xu4lw3.orewvhbjfn5oas79 \
	I0401 21:07:48.654207   70627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 21:07:48.654854   70627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 21:07:48.654879   70627 cni.go:84] Creating CNI manager for "kindnet"
	I0401 21:07:48.656486   70627 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 21:07:45.262936   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:46.762301   68904 pod_ready.go:93] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:46.762325   68904 pod_ready.go:82] duration metric: took 19.505245826s for pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:46.762339   68904 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-8lpnw" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:48.770589   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:47.927525   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:47.928071   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:47.928097   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:47.928026   72184 retry.go:31] will retry after 4.622496999s: waiting for domain to come up
	I0401 21:07:48.657874   70627 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 21:07:48.663855   70627 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 21:07:48.663882   70627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 21:07:48.684916   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 21:07:48.983530   70627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 21:07:48.983634   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:48.983651   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-269490 minikube.k8s.io/updated_at=2025_04_01T21_07_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=kindnet-269490 minikube.k8s.io/primary=true
	I0401 21:07:49.169988   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:49.170019   70627 ops.go:34] apiserver oom_adj: -16
	I0401 21:07:49.670692   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:50.170288   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:50.670668   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.170790   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.670642   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.775301   70627 kubeadm.go:1113] duration metric: took 2.791727937s to wait for elevateKubeSystemPrivileges
	I0401 21:07:51.775340   70627 kubeadm.go:394] duration metric: took 14.902284629s to StartCluster
	I0401 21:07:51.775359   70627 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:51.775433   70627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:07:51.776414   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:51.776667   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 21:07:51.776684   70627 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 21:07:51.776663   70627 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:07:51.776751   70627 addons.go:69] Setting storage-provisioner=true in profile "kindnet-269490"
	I0401 21:07:51.776767   70627 addons.go:238] Setting addon storage-provisioner=true in "kindnet-269490"
	I0401 21:07:51.776791   70627 host.go:66] Checking if "kindnet-269490" exists ...
	I0401 21:07:51.776801   70627 addons.go:69] Setting default-storageclass=true in profile "kindnet-269490"
	I0401 21:07:51.776821   70627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-269490"
	I0401 21:07:51.776876   70627 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:51.777230   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.777253   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.777275   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.777285   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.779535   70627 out.go:177] * Verifying Kubernetes components...
	I0401 21:07:51.780894   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:51.792573   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I0401 21:07:51.792618   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0401 21:07:51.793016   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.793065   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.793480   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.793504   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.793657   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.793680   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.794003   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.794035   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.794177   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.794522   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.794562   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.797467   70627 addons.go:238] Setting addon default-storageclass=true in "kindnet-269490"
	I0401 21:07:51.797509   70627 host.go:66] Checking if "kindnet-269490" exists ...
	I0401 21:07:51.797754   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.797788   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.812436   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0401 21:07:51.812455   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0401 21:07:51.812907   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.812960   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.813461   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.813479   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.813561   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.813576   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.813844   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.813927   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.813972   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.814617   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.814659   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.815559   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:51.818041   70627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:07:51.819387   70627 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:07:51.819404   70627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 21:07:51.819419   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:51.822051   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.822524   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:51.822549   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.822659   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:51.822828   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:51.822959   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:51.823080   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:51.830521   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I0401 21:07:51.830922   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.831277   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.831300   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.831604   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.831734   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.833172   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:51.833423   70627 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 21:07:51.833437   70627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 21:07:51.833452   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:51.835920   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.836208   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:51.836233   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.836310   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:51.836491   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:51.836611   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:51.836740   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:51.962702   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 21:07:51.987403   70627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:07:52.104000   70627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 21:07:52.189058   70627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:07:52.363088   70627 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0401 21:07:52.364340   70627 node_ready.go:35] waiting up to 15m0s for node "kindnet-269490" to be "Ready" ...
	I0401 21:07:52.440087   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.440110   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.440411   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.440428   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.440442   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.440451   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.440672   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.440687   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.440718   70627 main.go:141] libmachine: (kindnet-269490) DBG | Closing plugin on server side
	I0401 21:07:52.497451   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.497484   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.497812   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.497831   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.884016   70627 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-269490" context rescaled to 1 replicas
	I0401 21:07:52.961084   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.961107   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.961382   70627 main.go:141] libmachine: (kindnet-269490) DBG | Closing plugin on server side
	I0401 21:07:52.961424   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.961437   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.961455   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.961466   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.961684   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.961700   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.963918   70627 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 21:07:51.268101   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:53.269079   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:53.768113   68904 pod_ready.go:93] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.768135   68904 pod_ready.go:82] duration metric: took 7.005790357s for pod "calico-node-8lpnw" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.768143   68904 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.772369   68904 pod_ready.go:93] pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.772394   68904 pod_ready.go:82] duration metric: took 4.243794ms for pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.772406   68904 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.777208   68904 pod_ready.go:93] pod "etcd-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.777228   68904 pod_ready.go:82] duration metric: took 4.815519ms for pod "etcd-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.777237   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.780965   68904 pod_ready.go:93] pod "kube-apiserver-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.780986   68904 pod_ready.go:82] duration metric: took 3.742662ms for pod "kube-apiserver-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.780997   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.785450   68904 pod_ready.go:93] pod "kube-controller-manager-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.785473   68904 pod_ready.go:82] duration metric: took 4.467871ms for pod "kube-controller-manager-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.785484   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-clkkm" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.166123   68904 pod_ready.go:93] pod "kube-proxy-clkkm" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:54.166149   68904 pod_ready.go:82] duration metric: took 380.656026ms for pod "kube-proxy-clkkm" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.166161   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.567079   68904 pod_ready.go:93] pod "kube-scheduler-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:54.567105   68904 pod_ready.go:82] duration metric: took 400.93599ms for pod "kube-scheduler-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.567118   68904 pod_ready.go:39] duration metric: took 27.313232071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:07:54.567135   68904 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:07:54.567190   68904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:07:54.583839   68904 api_server.go:72] duration metric: took 36.214254974s to wait for apiserver process to appear ...
	I0401 21:07:54.583866   68904 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:07:54.583887   68904 api_server.go:253] Checking apiserver healthz at https://192.168.61.102:8443/healthz ...
	I0401 21:07:54.588495   68904 api_server.go:279] https://192.168.61.102:8443/healthz returned 200:
	ok
	I0401 21:07:54.589645   68904 api_server.go:141] control plane version: v1.32.2
	I0401 21:07:54.589671   68904 api_server.go:131] duration metric: took 5.795827ms to wait for apiserver health ...
	I0401 21:07:54.589681   68904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:07:54.767449   68904 system_pods.go:59] 9 kube-system pods found
	I0401 21:07:54.767492   68904 system_pods.go:61] "calico-kube-controllers-77969b7d87-64swg" [34a618ff-c7cd-447e-9ef9-32357bcf9e42] Running
	I0401 21:07:54.767499   68904 system_pods.go:61] "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
	I0401 21:07:54.767503   68904 system_pods.go:61] "coredns-668d6bf9bc-mn944" [fb12f605-c79b-4cdf-92c3-2a3bf9319b9f] Running
	I0401 21:07:54.767507   68904 system_pods.go:61] "etcd-calico-269490" [60128f13-ff1b-43d3-9577-30032cbc1224] Running
	I0401 21:07:54.767510   68904 system_pods.go:61] "kube-apiserver-calico-269490" [7bc4e2df-17c3-4c16-8fc4-6cbd4d194757] Running
	I0401 21:07:54.767513   68904 system_pods.go:61] "kube-controller-manager-calico-269490" [bada65a1-db90-4fe8-b3da-f55647a2a5f5] Running
	I0401 21:07:54.767516   68904 system_pods.go:61] "kube-proxy-clkkm" [20def08e-d6ad-4685-91cf-658019584c13] Running
	I0401 21:07:54.767519   68904 system_pods.go:61] "kube-scheduler-calico-269490" [02f99ab0-d476-4e0a-b12b-b62d8fded21c] Running
	I0401 21:07:54.767522   68904 system_pods.go:61] "storage-provisioner" [dea0b01b-b565-4ea8-b740-28125b3c579c] Running
	I0401 21:07:54.767528   68904 system_pods.go:74] duration metric: took 177.841641ms to wait for pod list to return data ...
	I0401 21:07:54.767537   68904 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:07:54.967440   68904 default_sa.go:45] found service account: "default"
	I0401 21:07:54.967473   68904 default_sa.go:55] duration metric: took 199.928997ms for default service account to be created ...
	I0401 21:07:54.967485   68904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:07:55.168431   68904 system_pods.go:86] 9 kube-system pods found
	I0401 21:07:55.168456   68904 system_pods.go:89] "calico-kube-controllers-77969b7d87-64swg" [34a618ff-c7cd-447e-9ef9-32357bcf9e42] Running
	I0401 21:07:55.168462   68904 system_pods.go:89] "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
	I0401 21:07:55.168466   68904 system_pods.go:89] "coredns-668d6bf9bc-mn944" [fb12f605-c79b-4cdf-92c3-2a3bf9319b9f] Running
	I0401 21:07:55.168469   68904 system_pods.go:89] "etcd-calico-269490" [60128f13-ff1b-43d3-9577-30032cbc1224] Running
	I0401 21:07:55.168472   68904 system_pods.go:89] "kube-apiserver-calico-269490" [7bc4e2df-17c3-4c16-8fc4-6cbd4d194757] Running
	I0401 21:07:55.168475   68904 system_pods.go:89] "kube-controller-manager-calico-269490" [bada65a1-db90-4fe8-b3da-f55647a2a5f5] Running
	I0401 21:07:55.168478   68904 system_pods.go:89] "kube-proxy-clkkm" [20def08e-d6ad-4685-91cf-658019584c13] Running
	I0401 21:07:55.168481   68904 system_pods.go:89] "kube-scheduler-calico-269490" [02f99ab0-d476-4e0a-b12b-b62d8fded21c] Running
	I0401 21:07:55.168484   68904 system_pods.go:89] "storage-provisioner" [dea0b01b-b565-4ea8-b740-28125b3c579c] Running
	I0401 21:07:55.168490   68904 system_pods.go:126] duration metric: took 200.999651ms to wait for k8s-apps to be running ...
	I0401 21:07:55.168499   68904 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:07:55.168548   68904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:07:55.186472   68904 system_svc.go:56] duration metric: took 17.963992ms WaitForService to wait for kubelet
	I0401 21:07:55.186500   68904 kubeadm.go:582] duration metric: took 36.816918566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:07:55.186519   68904 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:07:55.366862   68904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:07:55.366898   68904 node_conditions.go:123] node cpu capacity is 2
	I0401 21:07:55.366915   68904 node_conditions.go:105] duration metric: took 180.387995ms to run NodePressure ...
	I0401 21:07:55.366931   68904 start.go:241] waiting for startup goroutines ...
	I0401 21:07:55.366942   68904 start.go:246] waiting for cluster config update ...
	I0401 21:07:55.366957   68904 start.go:255] writing updated cluster config ...
	I0401 21:07:55.367292   68904 ssh_runner.go:195] Run: rm -f paused
	I0401 21:07:55.418044   68904 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:07:55.419536   68904 out.go:177] * Done! kubectl is now configured to use "calico-269490" cluster and "default" namespace by default
	I0401 21:07:52.552419   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.553020   72096 main.go:141] libmachine: (custom-flannel-269490) found domain IP: 192.168.39.115
	I0401 21:07:52.553043   72096 main.go:141] libmachine: (custom-flannel-269490) reserving static IP address...
	I0401 21:07:52.553055   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has current primary IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.553551   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find host DHCP lease matching {name: "custom-flannel-269490", mac: "52:54:00:bc:3c:1b", ip: "192.168.39.115"} in network mk-custom-flannel-269490
	I0401 21:07:52.633446   72096 main.go:141] libmachine: (custom-flannel-269490) reserved static IP address 192.168.39.115 for domain custom-flannel-269490
	I0401 21:07:52.633469   72096 main.go:141] libmachine: (custom-flannel-269490) waiting for SSH...
	I0401 21:07:52.633478   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Getting to WaitForSSH function...
	I0401 21:07:52.636801   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.637228   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.637263   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.637457   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Using SSH client type: external
	I0401 21:07:52.637483   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa (-rw-------)
	I0401 21:07:52.637524   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:07:52.637538   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | About to run SSH command:
	I0401 21:07:52.637570   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | exit 0
	I0401 21:07:52.767648   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | SSH cmd err, output: <nil>: 
	I0401 21:07:52.767922   72096 main.go:141] libmachine: (custom-flannel-269490) KVM machine creation complete
	I0401 21:07:52.768285   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:52.769401   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:52.769639   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:52.769839   72096 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 21:07:52.769855   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:07:52.771616   72096 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 21:07:52.771628   72096 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 21:07:52.771640   72096 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 21:07:52.771646   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:52.773957   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.774313   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.774339   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.774551   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:52.774732   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.774869   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.775003   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:52.775127   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:52.775341   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:52.775351   72096 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 21:07:52.885967   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:52.885995   72096 main.go:141] libmachine: Detecting the provisioner...
	I0401 21:07:52.886036   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:52.889797   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.890333   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.890380   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.890594   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:52.890795   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.891024   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.891176   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:52.891385   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:52.891599   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:52.891613   72096 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 21:07:52.999399   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 21:07:52.999480   72096 main.go:141] libmachine: found compatible host: buildroot
	I0401 21:07:52.999494   72096 main.go:141] libmachine: Provisioning with buildroot...
	I0401 21:07:52.999506   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:52.999737   72096 buildroot.go:166] provisioning hostname "custom-flannel-269490"
	I0401 21:07:52.999763   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:52.999983   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.002673   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.003040   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.003073   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.003201   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.003383   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.003531   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.003684   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.003853   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.004063   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.004074   72096 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-269490 && echo "custom-flannel-269490" | sudo tee /etc/hostname
	I0401 21:07:53.127662   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-269490
	
	I0401 21:07:53.127688   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.130650   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.131060   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.131088   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.131247   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.131442   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.131605   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.131748   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.131909   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.132149   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.132167   72096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-269490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-269490/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-269490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:07:53.247895   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:53.247927   72096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:07:53.247979   72096 buildroot.go:174] setting up certificates
	I0401 21:07:53.247998   72096 provision.go:84] configureAuth start
	I0401 21:07:53.248027   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:53.248299   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:53.251231   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.251683   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.251709   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.251871   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.254321   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.254634   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.254653   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.254785   72096 provision.go:143] copyHostCerts
	I0401 21:07:53.254838   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:07:53.254869   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:07:53.254963   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:07:53.255070   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:07:53.255080   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:07:53.255101   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:07:53.255172   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:07:53.255181   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:07:53.255206   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:07:53.255307   72096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-269490 san=[127.0.0.1 192.168.39.115 custom-flannel-269490 localhost minikube]
	I0401 21:07:53.423568   72096 provision.go:177] copyRemoteCerts
	I0401 21:07:53.423622   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:07:53.423644   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.426471   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.426823   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.426852   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.427026   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.427209   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.427437   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.427602   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:53.508573   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 21:07:53.534446   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:07:53.561750   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0401 21:07:53.586361   72096 provision.go:87] duration metric: took 338.347084ms to configureAuth
	I0401 21:07:53.586388   72096 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:07:53.586608   72096 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:53.586686   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.589262   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.589618   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.589647   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.589793   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.589985   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.590141   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.590283   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.590430   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.590630   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.590647   72096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:07:53.833008   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:07:53.833038   72096 main.go:141] libmachine: Checking connection to Docker...
	I0401 21:07:53.833049   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetURL
	I0401 21:07:53.834302   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | using libvirt version 6000000
	I0401 21:07:53.836570   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.836875   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.836903   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.837075   72096 main.go:141] libmachine: Docker is up and running!
	I0401 21:07:53.837093   72096 main.go:141] libmachine: Reticulating splines...
	I0401 21:07:53.837101   72096 client.go:171] duration metric: took 25.140961475s to LocalClient.Create
	I0401 21:07:53.837125   72096 start.go:167] duration metric: took 25.141025877s to libmachine.API.Create "custom-flannel-269490"
	I0401 21:07:53.837139   72096 start.go:293] postStartSetup for "custom-flannel-269490" (driver="kvm2")
	I0401 21:07:53.837151   72096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:07:53.837182   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:53.837406   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:07:53.837430   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.839674   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.839944   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.839977   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.840131   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.840293   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.840438   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.840600   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:53.925709   72096 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:07:53.930726   72096 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:07:53.930754   72096 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:07:53.930830   72096 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:07:53.930898   72096 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:07:53.931007   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:07:53.941164   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:53.967167   72096 start.go:296] duration metric: took 130.01591ms for postStartSetup
	I0401 21:07:53.967217   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:53.967908   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:53.970732   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.971053   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.971088   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.971318   72096 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json ...
	I0401 21:07:53.971510   72096 start.go:128] duration metric: took 25.295908261s to createHost
	I0401 21:07:53.971537   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.973863   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.974196   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.974232   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.974386   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.974599   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.974774   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.974910   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.975100   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.975291   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.975302   72096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:07:54.083312   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541674.029447156
	
	I0401 21:07:54.083342   72096 fix.go:216] guest clock: 1743541674.029447156
	I0401 21:07:54.083352   72096 fix.go:229] Guest: 2025-04-01 21:07:54.029447156 +0000 UTC Remote: 2025-04-01 21:07:53.971522792 +0000 UTC m=+33.113971403 (delta=57.924364ms)
	I0401 21:07:54.083375   72096 fix.go:200] guest clock delta is within tolerance: 57.924364ms
	I0401 21:07:54.083382   72096 start.go:83] releasing machines lock for "custom-flannel-269490", held for 25.407944503s
	I0401 21:07:54.083403   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.083645   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:54.086274   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.086622   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.086664   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.086836   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087440   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087609   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087702   72096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:07:54.087739   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:54.087821   72096 ssh_runner.go:195] Run: cat /version.json
	I0401 21:07:54.087841   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:54.090554   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.090879   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.090964   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.090990   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.091165   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:54.091298   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.091302   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:54.091344   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.091468   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:54.091525   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:54.091593   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:54.091664   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:54.091714   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:54.091847   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:54.193585   72096 ssh_runner.go:195] Run: systemctl --version
	I0401 21:07:54.199802   72096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:07:54.362009   72096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:07:54.369775   72096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:07:54.369842   72096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:07:54.392464   72096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:07:54.392493   72096 start.go:495] detecting cgroup driver to use...
	I0401 21:07:54.392575   72096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:07:54.415229   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:07:54.430169   72096 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:07:54.430260   72096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:07:54.446557   72096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:07:54.462441   72096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:07:54.581314   72096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:07:54.782985   72096 docker.go:233] disabling docker service ...
	I0401 21:07:54.783048   72096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:07:54.799920   72096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:07:54.817125   72096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:07:54.954170   72096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:07:55.099520   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:07:55.125853   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:07:55.147184   72096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 21:07:55.147253   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.158166   72096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:07:55.158264   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.169739   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.180580   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.192009   72096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:07:55.202863   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.213770   72096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.232492   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.243279   72096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:07:55.252819   72096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:07:55.252890   72096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:07:55.266009   72096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:07:55.276185   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:55.393356   72096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:07:55.494039   72096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:07:55.494118   72096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:07:55.499309   72096 start.go:563] Will wait 60s for crictl version
	I0401 21:07:55.499366   72096 ssh_runner.go:195] Run: which crictl
	I0401 21:07:55.503928   72096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:07:55.551590   72096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:07:55.551671   72096 ssh_runner.go:195] Run: crio --version
	I0401 21:07:55.584117   72096 ssh_runner.go:195] Run: crio --version
	I0401 21:07:55.615306   72096 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 21:07:55.616535   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:55.619254   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:55.619608   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:55.619636   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:55.619847   72096 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 21:07:55.624474   72096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:55.638014   72096 kubeadm.go:883] updating cluster {Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:07:55.638113   72096 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:55.638154   72096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:55.671768   72096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 21:07:55.671841   72096 ssh_runner.go:195] Run: which lz4
	I0401 21:07:55.675956   72096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:07:55.680087   72096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:07:55.680112   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 21:07:52.964723   70627 addons.go:514] duration metric: took 1.188041211s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:07:54.369067   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:07:56.867804   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:07:57.258849   72096 crio.go:462] duration metric: took 1.582927832s to copy over tarball
	I0401 21:07:57.258910   72096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:07:59.713811   72096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.454879542s)
	I0401 21:07:59.713834   72096 crio.go:469] duration metric: took 2.454960019s to extract the tarball
	I0401 21:07:59.713841   72096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:07:59.754131   72096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:59.803175   72096 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 21:07:59.803203   72096 cache_images.go:84] Images are preloaded, skipping loading
	I0401 21:07:59.803211   72096 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.32.2 crio true true} ...
	I0401 21:07:59.803435   72096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-269490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
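The unit fragment above is the kubelet drop-in written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes); the empty ExecStart= is the usual systemd idiom for clearing the ExecStart inherited from the base kubelet.service before defining a new one. To inspect the merged result on the node one could run (generic systemd commands, not part of this log):

	systemctl cat kubelet          # base unit plus all drop-ins, in merge order
	sudo systemctl daemon-reload   # required after unit changes, as the log does at 21:07:59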
	I0401 21:07:59.803542   72096 ssh_runner.go:195] Run: crio config
	I0401 21:07:59.859211   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:07:59.859254   72096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:07:59.859279   72096 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-269490 NodeName:custom-flannel-269490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 21:07:59.859420   72096 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-269490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.115"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 21:07:59.859485   72096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 21:07:59.872776   72096 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:07:59.872854   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:07:59.885208   72096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0401 21:07:59.906315   72096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:07:59.925314   72096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2301 bytes)
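The 2301-byte file staged above is the rendered kubeadm config printed at 21:07:59 (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document YAML). A condensed sketch of the hand-off that follows, using only paths shown later in this log (the full --ignore-preflight-errors list appears verbatim at 21:08:01):

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml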
	I0401 21:07:59.945350   72096 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0401 21:07:59.949720   72096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:59.963662   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:08:00.089313   72096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:08:00.110067   72096 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490 for IP: 192.168.39.115
	I0401 21:08:00.110106   72096 certs.go:194] generating shared ca certs ...
	I0401 21:08:00.110120   72096 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.110294   72096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:08:00.110353   72096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:08:00.110366   72096 certs.go:256] generating profile certs ...
	I0401 21:08:00.110447   72096 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key
	I0401 21:08:00.110464   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt with IP's: []
	I0401 21:08:00.467453   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt ...
	I0401 21:08:00.467488   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: {Name:mk5fce7bdfd13ea831b9ad59ba060161e466fba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.467673   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key ...
	I0401 21:08:00.467686   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key: {Name:mkd84c13916801a689354e72412e009ab37dbcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.467762   72096 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe
	I0401 21:08:00.467777   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.115]
	I0401 21:08:00.590635   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe ...
	I0401 21:08:00.590669   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe: {Name:mkda99eea5992b7c522818c8e4285bad25863233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.590826   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe ...
	I0401 21:08:00.590839   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe: {Name:mk9b0cf3137043b92f3b27be430ec53f12f6344f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.590912   72096 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt
	I0401 21:08:00.590994   72096 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key
	I0401 21:08:00.591062   72096 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key
	I0401 21:08:00.591077   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt with IP's: []
	I0401 21:08:00.940635   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt ...
	I0401 21:08:00.940673   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt: {Name:mked6a267559570093b231c1df683bf03eedde35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.940870   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key ...
	I0401 21:08:00.940890   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key: {Name:mke111681e05b7c77b9764da674c41796facd6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.941091   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:08:00.941141   72096 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:08:00.941157   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:08:00.941192   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:08:00.941230   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:08:00.941263   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:08:00.941317   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:08:00.941848   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:08:01.021801   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:08:01.047883   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:08:01.076127   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:08:01.101880   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 21:08:01.128066   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 21:08:01.155676   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:08:01.181194   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:08:01.208023   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:08:01.235447   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:08:01.263882   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:08:01.291788   72096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 21:08:01.311432   72096 ssh_runner.go:195] Run: openssl version
	I0401 21:08:01.317827   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:08:01.330054   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.335156   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.335215   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.341534   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:08:01.353100   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:08:01.364974   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.370126   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.370182   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.376077   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:08:01.387280   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:08:01.398763   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.403624   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.403672   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.409399   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
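The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs must be reachable through a symlink named after its subject hash with a .0 suffix, and "openssl x509 -hash -noout" prints exactly that hash. The same step for the minikube CA, with a variable added for readability:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in the line above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"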
	I0401 21:08:01.421319   72096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:08:01.426390   72096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 21:08:01.426469   72096 kubeadm.go:392] StartCluster: {Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:08:01.426539   72096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:08:01.426621   72096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:08:01.483618   72096 cri.go:89] found id: ""
	I0401 21:08:01.483709   72096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:08:01.497458   72096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:08:01.510064   72096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:08:01.525097   72096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:08:01.525126   72096 kubeadm.go:157] found existing configuration files:
	
	I0401 21:08:01.525187   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:08:01.538475   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:08:01.538537   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:08:01.549865   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:08:01.564435   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:08:01.564512   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:08:01.577112   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:08:01.588654   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:08:01.588723   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:08:01.600399   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:08:01.611302   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:08:01.611382   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:08:01.626795   72096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:08:01.706166   72096 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 21:08:01.706290   72096 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:01.816483   72096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:01.816607   72096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:01.816718   72096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:08:01.826517   72096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:07:59.368327   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:01.867707   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:01.944921   72096 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:01.945033   72096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:01.945102   72096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:01.997637   72096 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 21:08:02.082193   72096 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 21:08:02.370051   72096 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 21:08:02.610131   72096 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 21:08:02.813327   72096 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 21:08:02.813505   72096 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-269490 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0401 21:08:02.959340   72096 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 21:08:02.959508   72096 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-269490 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0401 21:08:03.064528   72096 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 21:08:03.321464   72096 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 21:08:03.362989   72096 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 21:08:03.363077   72096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:03.478482   72096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:03.742329   72096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 21:08:03.877782   72096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:04.064813   72096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:04.137063   72096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:04.137482   72096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:04.141208   72096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:04.143036   72096 out.go:235]   - Booting up control plane ...
	I0401 21:08:04.143157   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:04.144620   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:04.145423   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:04.172192   72096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:04.183885   72096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:04.183985   72096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:04.340951   72096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 21:08:04.341118   72096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 21:08:04.842463   72096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.673213ms
	I0401 21:08:04.842565   72096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
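Both waits poll plain HTTP health endpoints, so a stalled start can be probed by hand from inside the VM (for example via minikube ssh). A sketch using the addresses shown in this log; -k skips TLS verification so the sketch does not depend on the local CA setup:

	curl -s  http://127.0.0.1:10248/healthz           # kubelet, as in the kubelet-check above
	curl -sk https://192.168.39.115:8443/healthz      # kube-apiserver, the same endpoint minikube polls later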
	I0401 21:08:03.867783   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:05.868899   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:10.848073   72096 kubeadm.go:310] [api-check] The API server is healthy after 6.003303805s
	I0401 21:08:10.859890   72096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 21:08:10.875896   72096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 21:08:10.906682   72096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 21:08:10.906981   72096 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-269490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 21:08:10.931670   72096 kubeadm.go:310] [bootstrap-token] Using token: y1rxzx.ol9rd2e05i88tezo
	I0401 21:08:07.870418   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:09.374853   70627 node_ready.go:49] node "kindnet-269490" has status "Ready":"True"
	I0401 21:08:09.374880   70627 node_ready.go:38] duration metric: took 17.010513164s for node "kindnet-269490" to be "Ready" ...
	I0401 21:08:09.374892   70627 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:09.378622   70627 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.383841   70627 pod_ready.go:93] pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.383869   70627 pod_ready.go:82] duration metric: took 1.005212656s for pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.383881   70627 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.388202   70627 pod_ready.go:93] pod "etcd-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.388230   70627 pod_ready.go:82] duration metric: took 4.341416ms for pod "etcd-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.388246   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.393029   70627 pod_ready.go:93] pod "kube-apiserver-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.393061   70627 pod_ready.go:82] duration metric: took 4.797935ms for pod "kube-apiserver-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.393076   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.397690   70627 pod_ready.go:93] pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.397711   70627 pod_ready.go:82] duration metric: took 4.626561ms for pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.397722   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b5cp4" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.570047   70627 pod_ready.go:93] pod "kube-proxy-b5cp4" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.570070   70627 pod_ready.go:82] duration metric: took 172.341286ms for pod "kube-proxy-b5cp4" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.570080   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.969135   70627 pod_ready.go:93] pod "kube-scheduler-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.969167   70627 pod_ready.go:82] duration metric: took 399.078827ms for pod "kube-scheduler-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.969182   70627 pod_ready.go:39] duration metric: took 1.594272558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:10.969200   70627 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:08:10.969260   70627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:08:10.986425   70627 api_server.go:72] duration metric: took 19.20965796s to wait for apiserver process to appear ...
	I0401 21:08:10.986449   70627 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:08:10.986476   70627 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0401 21:08:10.991890   70627 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0401 21:08:10.993199   70627 api_server.go:141] control plane version: v1.32.2
	I0401 21:08:10.993221   70627 api_server.go:131] duration metric: took 6.765166ms to wait for apiserver health ...
	I0401 21:08:10.993228   70627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:08:11.169752   70627 system_pods.go:59] 8 kube-system pods found
	I0401 21:08:11.169784   70627 system_pods.go:61] "coredns-668d6bf9bc-fqk9t" [1aa997a2-044b-4f1e-bd5f-eb88acdcd380] Running
	I0401 21:08:11.169789   70627 system_pods.go:61] "etcd-kindnet-269490" [6eb8dc71-efc6-40e9-89db-6947499e653f] Running
	I0401 21:08:11.169793   70627 system_pods.go:61] "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
	I0401 21:08:11.169796   70627 system_pods.go:61] "kube-apiserver-kindnet-269490" [35601d6b-2485-45ff-b906-80cd3d73bb50] Running
	I0401 21:08:11.169800   70627 system_pods.go:61] "kube-controller-manager-kindnet-269490" [75f07631-fab7-404a-b309-4ea7d2af791e] Running
	I0401 21:08:11.169803   70627 system_pods.go:61] "kube-proxy-b5cp4" [dce5a6b6-9133-4a63-b683-ffbe803e9481] Running
	I0401 21:08:11.169806   70627 system_pods.go:61] "kube-scheduler-kindnet-269490" [313714c7-ef0d-4991-b38e-7ea5d1815849] Running
	I0401 21:08:11.169808   70627 system_pods.go:61] "storage-provisioner" [39894cc3-b618-4ee1-8a46-7065c914830c] Running
	I0401 21:08:11.169816   70627 system_pods.go:74] duration metric: took 176.581209ms to wait for pod list to return data ...
	I0401 21:08:11.169825   70627 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:08:11.370607   70627 default_sa.go:45] found service account: "default"
	I0401 21:08:11.370635   70627 default_sa.go:55] duration metric: took 200.803332ms for default service account to be created ...
	I0401 21:08:11.370646   70627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:08:11.570070   70627 system_pods.go:86] 8 kube-system pods found
	I0401 21:08:11.570099   70627 system_pods.go:89] "coredns-668d6bf9bc-fqk9t" [1aa997a2-044b-4f1e-bd5f-eb88acdcd380] Running
	I0401 21:08:11.570105   70627 system_pods.go:89] "etcd-kindnet-269490" [6eb8dc71-efc6-40e9-89db-6947499e653f] Running
	I0401 21:08:11.570109   70627 system_pods.go:89] "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
	I0401 21:08:11.570112   70627 system_pods.go:89] "kube-apiserver-kindnet-269490" [35601d6b-2485-45ff-b906-80cd3d73bb50] Running
	I0401 21:08:11.570116   70627 system_pods.go:89] "kube-controller-manager-kindnet-269490" [75f07631-fab7-404a-b309-4ea7d2af791e] Running
	I0401 21:08:11.570118   70627 system_pods.go:89] "kube-proxy-b5cp4" [dce5a6b6-9133-4a63-b683-ffbe803e9481] Running
	I0401 21:08:11.570122   70627 system_pods.go:89] "kube-scheduler-kindnet-269490" [313714c7-ef0d-4991-b38e-7ea5d1815849] Running
	I0401 21:08:11.570125   70627 system_pods.go:89] "storage-provisioner" [39894cc3-b618-4ee1-8a46-7065c914830c] Running
	I0401 21:08:11.570132   70627 system_pods.go:126] duration metric: took 199.479575ms to wait for k8s-apps to be running ...
	I0401 21:08:11.570138   70627 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:08:11.570180   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:08:11.587544   70627 system_svc.go:56] duration metric: took 17.395489ms WaitForService to wait for kubelet
	I0401 21:08:11.587581   70627 kubeadm.go:582] duration metric: took 19.810818504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:08:11.587624   70627 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:08:11.769946   70627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:08:11.769972   70627 node_conditions.go:123] node cpu capacity is 2
	I0401 21:08:11.769983   70627 node_conditions.go:105] duration metric: took 182.353276ms to run NodePressure ...
	I0401 21:08:11.769993   70627 start.go:241] waiting for startup goroutines ...
	I0401 21:08:11.770001   70627 start.go:246] waiting for cluster config update ...
	I0401 21:08:11.770014   70627 start.go:255] writing updated cluster config ...
	I0401 21:08:11.770327   70627 ssh_runner.go:195] Run: rm -f paused
	I0401 21:08:11.821228   70627 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:08:11.823026   70627 out.go:177] * Done! kubectl is now configured to use "kindnet-269490" cluster and "default" namespace by default
	I0401 21:08:10.933219   72096 out.go:235]   - Configuring RBAC rules ...
	I0401 21:08:10.933389   72096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 21:08:10.953572   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 21:08:10.970295   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 21:08:10.974769   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 21:08:10.978152   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 21:08:10.982485   72096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 21:08:11.255128   72096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 21:08:11.700130   72096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 21:08:12.254377   72096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 21:08:12.254408   72096 kubeadm.go:310] 
	I0401 21:08:12.254498   72096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 21:08:12.254529   72096 kubeadm.go:310] 
	I0401 21:08:12.254681   72096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 21:08:12.254700   72096 kubeadm.go:310] 
	I0401 21:08:12.254729   72096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 21:08:12.254812   72096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 21:08:12.254885   72096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 21:08:12.254895   72096 kubeadm.go:310] 
	I0401 21:08:12.254989   72096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 21:08:12.254999   72096 kubeadm.go:310] 
	I0401 21:08:12.255069   72096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 21:08:12.255078   72096 kubeadm.go:310] 
	I0401 21:08:12.255148   72096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 21:08:12.255261   72096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 21:08:12.255357   72096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 21:08:12.255368   72096 kubeadm.go:310] 
	I0401 21:08:12.255483   72096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 21:08:12.255610   72096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 21:08:12.255631   72096 kubeadm.go:310] 
	I0401 21:08:12.255741   72096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y1rxzx.ol9rd2e05i88tezo \
	I0401 21:08:12.255881   72096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 21:08:12.255916   72096 kubeadm.go:310] 	--control-plane 
	I0401 21:08:12.255926   72096 kubeadm.go:310] 
	I0401 21:08:12.256021   72096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 21:08:12.256030   72096 kubeadm.go:310] 
	I0401 21:08:12.256150   72096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y1rxzx.ol9rd2e05i88tezo \
	I0401 21:08:12.256298   72096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 21:08:12.257066   72096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
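The single preflight warning above is kubeadm's standard note that the kubelet unit is not enabled for boot; the corresponding command, if persistence across reboots is wanted, is simply:

	sudo systemctl enable kubelet.service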
	I0401 21:08:12.257093   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:08:12.259236   72096 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0401 21:08:12.260686   72096 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 21:08:12.260745   72096 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0401 21:08:12.267034   72096 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0401 21:08:12.267068   72096 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0401 21:08:12.296900   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 21:08:12.848752   72096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 21:08:12.848860   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:12.848947   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-269490 minikube.k8s.io/updated_at=2025_04_01T21_08_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=custom-flannel-269490 minikube.k8s.io/primary=true
	I0401 21:08:12.877431   72096 ops.go:34] apiserver oom_adj: -16
	I0401 21:08:12.985187   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:13.485414   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:13.985981   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:14.485489   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:14.985825   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:15.485827   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:15.985754   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:16.148701   72096 kubeadm.go:1113] duration metric: took 3.299903142s to wait for elevateKubeSystemPrivileges
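The burst of identical "get sa default" calls above, one every ~500ms per the timestamps, is a poll: minikube waits until the controller-manager has created the default ServiceAccount before treating the cluster-admin binding created at 21:08:12 as settled (the elevateKubeSystemPrivileges name in the duration line appears to refer to that binding). An equivalent shell sketch of the wait; the 0.5s interval is read off the timestamps, not stated in the log:

	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done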
	I0401 21:08:16.148749   72096 kubeadm.go:394] duration metric: took 14.722285454s to StartCluster
	I0401 21:08:16.148769   72096 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:16.148863   72096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:08:16.150194   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:16.150504   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 21:08:16.150507   72096 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:08:16.150594   72096 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 21:08:16.150716   72096 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:08:16.150735   72096 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-269490"
	I0401 21:08:16.150760   72096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-269490"
	I0401 21:08:16.150715   72096 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-269490"
	I0401 21:08:16.150863   72096 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-269490"
	I0401 21:08:16.150890   72096 host.go:66] Checking if "custom-flannel-269490" exists ...
	I0401 21:08:16.151250   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.151283   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.151250   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.151392   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.152288   72096 out.go:177] * Verifying Kubernetes components...
	I0401 21:08:16.153941   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:08:16.167829   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0401 21:08:16.167856   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0401 21:08:16.168243   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.168391   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.168828   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.168843   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.168868   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.168884   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.169237   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.169245   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.169517   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.169824   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.169861   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.172742   72096 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-269490"
	I0401 21:08:16.172773   72096 host.go:66] Checking if "custom-flannel-269490" exists ...
	I0401 21:08:16.172999   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.173021   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.187721   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0401 21:08:16.188253   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.188750   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.188774   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.189282   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.189445   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.189724   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0401 21:08:16.190201   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.190710   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.190728   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.191093   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.191453   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:08:16.191654   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.191689   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.192999   72096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:08:16.194424   72096 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:08:16.194442   72096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 21:08:16.194461   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:08:16.197511   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.198005   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:08:16.198041   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.198238   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:08:16.198409   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:08:16.198748   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:08:16.198918   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:08:16.207703   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0401 21:08:16.208135   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.208589   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.208612   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.209006   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.209189   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.211107   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:08:16.211344   72096 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 21:08:16.211365   72096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 21:08:16.211385   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:08:16.213813   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.214123   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:08:16.214151   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.214296   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:08:16.214499   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:08:16.214910   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:08:16.215227   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:08:16.590199   72096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:08:16.590208   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 21:08:16.643763   72096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 21:08:16.713804   72096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:08:17.209943   72096 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0401 21:08:17.210084   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.210105   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.210495   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.210517   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.210528   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.210536   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.210760   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.210776   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.211295   72096 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-269490" to be "Ready" ...
	I0401 21:08:17.251129   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.251163   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.251515   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.251537   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.251546   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.513636   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.513660   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.515627   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.515656   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.515670   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.515670   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.515679   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.515935   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.515951   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.515959   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.517748   72096 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 21:08:17.519567   72096 addons.go:514] duration metric: took 1.36897309s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:08:17.714019   72096 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-269490" context rescaled to 1 replicas
	I0401 21:08:19.214809   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:21.214845   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:23.715394   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:25.752337   72096 node_ready.go:49] node "custom-flannel-269490" has status "Ready":"True"
	I0401 21:08:25.752361   72096 node_ready.go:38] duration metric: took 8.541004401s for node "custom-flannel-269490" to be "Ready" ...
	I0401 21:08:25.752373   72096 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:25.781711   72096 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:27.788318   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:29.789254   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:32.287111   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:34.287266   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:36.288139   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:38.788164   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:39.288278   72096 pod_ready.go:93] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.288311   72096 pod_ready.go:82] duration metric: took 13.506568961s for pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.288323   72096 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.293894   72096 pod_ready.go:93] pod "etcd-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.293914   72096 pod_ready.go:82] duration metric: took 5.583334ms for pod "etcd-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.293922   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.299231   72096 pod_ready.go:93] pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.299260   72096 pod_ready.go:82] duration metric: took 5.329864ms for pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.299273   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.303589   72096 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.303611   72096 pod_ready.go:82] duration metric: took 4.329184ms for pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.303626   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7mfxw" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.307588   72096 pod_ready.go:93] pod "kube-proxy-7mfxw" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.307608   72096 pod_ready.go:82] duration metric: took 3.974955ms for pod "kube-proxy-7mfxw" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.307619   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.686205   72096 pod_ready.go:93] pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.686262   72096 pod_ready.go:82] duration metric: took 378.634734ms for pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.686278   72096 pod_ready.go:39] duration metric: took 13.933890743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:39.686295   72096 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:08:39.686354   72096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:08:39.707371   72096 api_server.go:72] duration metric: took 23.556833358s to wait for apiserver process to appear ...
	I0401 21:08:39.707408   72096 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:08:39.707430   72096 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0401 21:08:39.712196   72096 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0401 21:08:39.713175   72096 api_server.go:141] control plane version: v1.32.2
	I0401 21:08:39.713206   72096 api_server.go:131] duration metric: took 5.790036ms to wait for apiserver health ...
	I0401 21:08:39.713216   72096 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:08:39.887726   72096 system_pods.go:59] 7 kube-system pods found
	I0401 21:08:39.887756   72096 system_pods.go:61] "coredns-668d6bf9bc-5mj4j" [36eeaf01-f8b5-4b27-a127-3e8e6fb6fe55] Running
	I0401 21:08:39.887763   72096 system_pods.go:61] "etcd-custom-flannel-269490" [13ff3a81-1ab8-47ea-9773-5d96ece48b42] Running
	I0401 21:08:39.887768   72096 system_pods.go:61] "kube-apiserver-custom-flannel-269490" [6593f2ea-974b-4d95-89ea-5231ae3f8f9a] Running
	I0401 21:08:39.887773   72096 system_pods.go:61] "kube-controller-manager-custom-flannel-269490" [badd65c7-6a1d-4ac6-8e2b-81b0523d520a] Running
	I0401 21:08:39.887777   72096 system_pods.go:61] "kube-proxy-7mfxw" [1b07ba12-0e06-432e-b1ef-6712daa0aceb] Running
	I0401 21:08:39.887786   72096 system_pods.go:61] "kube-scheduler-custom-flannel-269490" [c28fe18b-4d5e-481c-9f77-897e84bdc147] Running
	I0401 21:08:39.887791   72096 system_pods.go:61] "storage-provisioner" [23315522-a502-4852-98ec-9589e819d09c] Running
	I0401 21:08:39.887799   72096 system_pods.go:74] duration metric: took 174.575758ms to wait for pod list to return data ...
	I0401 21:08:39.887809   72096 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:08:40.086898   72096 default_sa.go:45] found service account: "default"
	I0401 21:08:40.086922   72096 default_sa.go:55] duration metric: took 199.10767ms for default service account to be created ...
	I0401 21:08:40.086932   72096 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:08:40.287384   72096 system_pods.go:86] 7 kube-system pods found
	I0401 21:08:40.287416   72096 system_pods.go:89] "coredns-668d6bf9bc-5mj4j" [36eeaf01-f8b5-4b27-a127-3e8e6fb6fe55] Running
	I0401 21:08:40.287421   72096 system_pods.go:89] "etcd-custom-flannel-269490" [13ff3a81-1ab8-47ea-9773-5d96ece48b42] Running
	I0401 21:08:40.287425   72096 system_pods.go:89] "kube-apiserver-custom-flannel-269490" [6593f2ea-974b-4d95-89ea-5231ae3f8f9a] Running
	I0401 21:08:40.287429   72096 system_pods.go:89] "kube-controller-manager-custom-flannel-269490" [badd65c7-6a1d-4ac6-8e2b-81b0523d520a] Running
	I0401 21:08:40.287432   72096 system_pods.go:89] "kube-proxy-7mfxw" [1b07ba12-0e06-432e-b1ef-6712daa0aceb] Running
	I0401 21:08:40.287435   72096 system_pods.go:89] "kube-scheduler-custom-flannel-269490" [c28fe18b-4d5e-481c-9f77-897e84bdc147] Running
	I0401 21:08:40.287438   72096 system_pods.go:89] "storage-provisioner" [23315522-a502-4852-98ec-9589e819d09c] Running
	I0401 21:08:40.287443   72096 system_pods.go:126] duration metric: took 200.50653ms to wait for k8s-apps to be running ...
	I0401 21:08:40.287450   72096 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:08:40.287503   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:08:40.303609   72096 system_svc.go:56] duration metric: took 16.150777ms WaitForService to wait for kubelet
	I0401 21:08:40.303639   72096 kubeadm.go:582] duration metric: took 24.153106492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:08:40.303665   72096 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:08:40.486884   72096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:08:40.486919   72096 node_conditions.go:123] node cpu capacity is 2
	I0401 21:08:40.486933   72096 node_conditions.go:105] duration metric: took 183.261884ms to run NodePressure ...
	I0401 21:08:40.486946   72096 start.go:241] waiting for startup goroutines ...
	I0401 21:08:40.486955   72096 start.go:246] waiting for cluster config update ...
	I0401 21:08:40.486969   72096 start.go:255] writing updated cluster config ...
	I0401 21:08:40.487283   72096 ssh_runner.go:195] Run: rm -f paused
	I0401 21:08:40.546242   72096 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:08:40.548286   72096 out.go:177] * Done! kubectl is now configured to use "custom-flannel-269490" cluster and "default" namespace by default
	I0401 21:08:44.694071   61496 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 21:08:44.694235   61496 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 21:08:44.695734   61496 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 21:08:44.695829   61496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:44.695942   61496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:44.696082   61496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:44.696333   61496 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 21:08:44.696433   61496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:08:44.698422   61496 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:44.698535   61496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:44.698622   61496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:44.698707   61496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 21:08:44.698782   61496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 21:08:44.698848   61496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 21:08:44.698894   61496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 21:08:44.698952   61496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 21:08:44.699004   61496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 21:08:44.699067   61496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 21:08:44.699131   61496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 21:08:44.699164   61496 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 21:08:44.699213   61496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:44.699257   61496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:44.699302   61496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:44.699360   61496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:44.699410   61496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:44.699518   61496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:44.699595   61496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:44.699630   61496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:44.699705   61496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:44.701085   61496 out.go:235]   - Booting up control plane ...
	I0401 21:08:44.701182   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:44.701269   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:44.701370   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:44.701492   61496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:44.701663   61496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 21:08:44.701710   61496 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 21:08:44.701768   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.701969   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702033   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702244   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702341   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702570   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702639   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702818   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702922   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.703238   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.703248   61496 kubeadm.go:310] 
	I0401 21:08:44.703300   61496 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 21:08:44.703339   61496 kubeadm.go:310] 		timed out waiting for the condition
	I0401 21:08:44.703347   61496 kubeadm.go:310] 
	I0401 21:08:44.703393   61496 kubeadm.go:310] 	This error is likely caused by:
	I0401 21:08:44.703424   61496 kubeadm.go:310] 		- The kubelet is not running
	I0401 21:08:44.703575   61496 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 21:08:44.703594   61496 kubeadm.go:310] 
	I0401 21:08:44.703747   61496 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 21:08:44.703797   61496 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 21:08:44.703843   61496 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 21:08:44.703851   61496 kubeadm.go:310] 
	I0401 21:08:44.703979   61496 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 21:08:44.704106   61496 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 21:08:44.704117   61496 kubeadm.go:310] 
	I0401 21:08:44.704223   61496 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 21:08:44.704338   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 21:08:44.704400   61496 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 21:08:44.704458   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 21:08:44.704515   61496 kubeadm.go:394] duration metric: took 8m1.369559682s to StartCluster
	I0401 21:08:44.704550   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:08:44.704601   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:08:44.704607   61496 kubeadm.go:310] 
	I0401 21:08:44.776607   61496 cri.go:89] found id: ""
	I0401 21:08:44.776631   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.776638   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:08:44.776643   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:08:44.776688   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:08:44.822697   61496 cri.go:89] found id: ""
	I0401 21:08:44.822724   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.822732   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:08:44.822737   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:08:44.822789   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:08:44.870855   61496 cri.go:89] found id: ""
	I0401 21:08:44.870884   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.870895   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:08:44.870903   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:08:44.870963   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:08:44.909983   61496 cri.go:89] found id: ""
	I0401 21:08:44.910010   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.910019   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:08:44.910025   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:08:44.910205   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:08:44.947636   61496 cri.go:89] found id: ""
	I0401 21:08:44.947667   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.947677   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:08:44.947684   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:08:44.947742   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:08:44.987225   61496 cri.go:89] found id: ""
	I0401 21:08:44.987254   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.987265   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:08:44.987273   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:08:44.987328   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:08:45.031455   61496 cri.go:89] found id: ""
	I0401 21:08:45.031483   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.031493   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:08:45.031498   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:08:45.031556   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:08:45.073545   61496 cri.go:89] found id: ""
	I0401 21:08:45.073572   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.073582   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:08:45.073593   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:08:45.073604   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:08:45.139059   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:08:45.139110   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:08:45.156271   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:08:45.156309   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:08:45.239654   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:08:45.239682   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:08:45.239697   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:08:45.355473   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:08:45.355501   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0401 21:08:45.401208   61496 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 21:08:45.401255   61496 out.go:270] * 
	W0401 21:08:45.401306   61496 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.401323   61496 out.go:270] * 
	W0401 21:08:45.402124   61496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 21:08:45.405265   61496 out.go:201] 
	W0401 21:08:45.406413   61496 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.406448   61496 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 21:08:45.406470   61496 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 21:08:45.407866   61496 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.001164169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542268001139430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcc29844-3fa7-48d3-81f3-8b28493745ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.001762728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f9e3832-0ec3-4b0c-a74b-6b20b9a726a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.001817227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f9e3832-0ec3-4b0c-a74b-6b20b9a726a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.001853305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0f9e3832-0ec3-4b0c-a74b-6b20b9a726a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.035272407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef9d0c12-aa60-4710-88f1-0bc310ec998c name=/runtime.v1.RuntimeService/Version
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.035370977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef9d0c12-aa60-4710-88f1-0bc310ec998c name=/runtime.v1.RuntimeService/Version
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.036593769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fb64f03-1e46-48d3-9bfd-684d4db71af4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.037074654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542268037048931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fb64f03-1e46-48d3-9bfd-684d4db71af4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.037599128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47208173-6d1e-473f-b366-99f276e6b468 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.037650858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47208173-6d1e-473f-b366-99f276e6b468 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.037687200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=47208173-6d1e-473f-b366-99f276e6b468 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.076657856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1126bd2-50a2-4f61-8c42-232e2bd6b6a4 name=/runtime.v1.RuntimeService/Version
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.076741249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1126bd2-50a2-4f61-8c42-232e2bd6b6a4 name=/runtime.v1.RuntimeService/Version
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.077889167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2299868-fcfd-4fcb-89e7-875ce8c46b15 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.078380620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542268078349346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2299868-fcfd-4fcb-89e7-875ce8c46b15 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.079050203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22e0f298-10a4-43c2-9a14-bdd77d7e8ef0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.079113785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22e0f298-10a4-43c2-9a14-bdd77d7e8ef0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.079145816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=22e0f298-10a4-43c2-9a14-bdd77d7e8ef0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.113365326Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7e5df0f-7ae9-4167-b0c8-b62a671e1882 name=/runtime.v1.RuntimeService/Version
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.113456655Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7e5df0f-7ae9-4167-b0c8-b62a671e1882 name=/runtime.v1.RuntimeService/Version
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.115136510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdb80f55-a4a3-4b1c-a02c-baa008fa0ec9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.115509601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542268115483939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdb80f55-a4a3-4b1c-a02c-baa008fa0ec9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.116148818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5947965-4d11-4312-b7c9-a226cfeb8059 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.116205910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5947965-4d11-4312-b7c9-a226cfeb8059 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:17:48 old-k8s-version-582207 crio[644]: time="2025-04-01 21:17:48.116243935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a5947965-4d11-4312-b7c9-a226cfeb8059 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 21:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054135] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.204738] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.959861] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.661664] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.677904] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068300] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079515] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.190777] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.171995] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.258506] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.231600] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.068848] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.731705] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[ +11.880365] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 21:04] systemd-fstab-generator[5005]: Ignoring "noauto" option for root device
	[Apr 1 21:06] systemd-fstab-generator[5279]: Ignoring "noauto" option for root device
	[  +0.075307] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:17:48 up 17 min,  0 users,  load average: 0.02, 0.02, 0.02
	Linux old-k8s-version-582207 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net.(*sysDialer).dialTCP(0xc00090f180, 0x4f7fe40, 0xc00048a1e0, 0x0, 0xc0008cb7a0, 0x57b620, 0x48ab5d6, 0x7fd373759738)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net.(*sysDialer).dialSingle(0xc00090f180, 0x4f7fe40, 0xc00048a1e0, 0x4f1ff00, 0xc0008cb7a0, 0x0, 0x0, 0x0, 0x0)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net.(*sysDialer).dialSerial(0xc00090f180, 0x4f7fe40, 0xc00048a1e0, 0xc00097f640, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/dial.go:548 +0x152
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net.(*Dialer).DialContext(0xc000b8a9c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c6f860, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b8fda0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c6f860, 0x24, 0x60, 0x7fd373509440, 0x118, ...)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net/http.(*Transport).dial(0xc0008ae000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000c6f860, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net/http.(*Transport).dialConn(0xc0008ae000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00094e3c0, 0x5, 0xc000c6f860, 0x24, 0x0, 0xc000c2ed80, ...)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net/http.(*Transport).dialConnFor(0xc0008ae000, 0xc000118210)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: created by net/http.(*Transport).queueForDial
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: goroutine 112 [select]:
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000bd7b00, 0xc00090f400, 0xc000714ae0, 0xc000714a80)
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]: created by net.(*netFD).connect
	Apr 01 21:17:47 old-k8s-version-582207 kubelet[6457]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 01 21:17:47 old-k8s-version-582207 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 01 21:17:47 old-k8s-version-582207 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (238.412666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-582207" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (355.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:17:55.436613   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:18:11.843439   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:18:23.138624   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:18:39.544483   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:18:41.069874   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:19:06.657796   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:19:08.771916   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 25 more times]
E0401 21:20:24.181563   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/auto-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 30 more times]
E0401 21:20:54.980846   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:20:55.863196   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 37 more times]
E0401 21:21:34.668853   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/enable-default-cni-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 4 more times]
E0401 21:21:39.529934   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 13 more times]
E0401 21:21:53.407020   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/bridge-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 25 more times]
E0401 21:22:18.928221   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 8 more times]
E0401 21:22:27.798788   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
[last message repeated 26 more times]
E0401 21:22:55.436620   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/calico-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:23:02.592826   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:23:11.843639   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
E0401 21:23:41.069805   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.128:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (227.260565ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-582207" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-582207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-582207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.39µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-582207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
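(For manual follow-up outside the test harness, the failing assertions above can be approximated directly against the profile. These commands are reconstructed from the log lines above, not captured from this run; with the apiserver reported as Stopped, the kubectl calls are expected to fail with the same connection-refused errors seen in the warnings:

	kubectl --context old-k8s-version-582207 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context old-k8s-version-582207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207
)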
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (233.545653ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-582207 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo cat                    | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo cat                    | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo cat                    | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-269490 sudo                        | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-269490                             | custom-flannel-269490 | jenkins | v1.35.0 | 01 Apr 25 21:09 UTC | 01 Apr 25 21:09 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 21:07:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 21:07:20.892475   72096 out.go:345] Setting OutFile to fd 1 ...
	I0401 21:07:20.892577   72096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:07:20.892588   72096 out.go:358] Setting ErrFile to fd 2...
	I0401 21:07:20.892592   72096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 21:07:20.892779   72096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 21:07:20.893387   72096 out.go:352] Setting JSON to false
	I0401 21:07:20.894914   72096 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6585,"bootTime":1743535056,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 21:07:20.895074   72096 start.go:139] virtualization: kvm guest
	I0401 21:07:20.896928   72096 out.go:177] * [custom-flannel-269490] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 21:07:20.898151   72096 notify.go:220] Checking for updates...
	I0401 21:07:20.898184   72096 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 21:07:20.899289   72096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 21:07:20.900337   72096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:07:20.901554   72096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:20.902784   72096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 21:07:20.903866   72096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 21:07:20.905447   72096 config.go:182] Loaded profile config "calico-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:20.905560   72096 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:20.905643   72096 config.go:182] Loaded profile config "old-k8s-version-582207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 21:07:20.905706   72096 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 21:07:20.945212   72096 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 21:07:20.946413   72096 start.go:297] selected driver: kvm2
	I0401 21:07:20.946434   72096 start.go:901] validating driver "kvm2" against <nil>
	I0401 21:07:20.946446   72096 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 21:07:20.947178   72096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:07:20.947262   72096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 21:07:20.963919   72096 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 21:07:20.963985   72096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 21:07:20.964232   72096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:07:20.964268   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:07:20.964285   72096 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0401 21:07:20.964365   72096 start.go:340] cluster config:
	{Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:07:20.964523   72096 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 21:07:20.966047   72096 out.go:177] * Starting "custom-flannel-269490" primary control-plane node in "custom-flannel-269490" cluster
	I0401 21:07:18.476294   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:18.476788   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find current IP address of domain kindnet-269490 in network mk-kindnet-269490
	I0401 21:07:18.476808   70627 main.go:141] libmachine: (kindnet-269490) DBG | I0401 21:07:18.476765   70649 retry.go:31] will retry after 3.122657647s: waiting for domain to come up
	I0401 21:07:21.603058   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:21.603568   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find current IP address of domain kindnet-269490 in network mk-kindnet-269490
	I0401 21:07:21.603587   70627 main.go:141] libmachine: (kindnet-269490) DBG | I0401 21:07:21.603538   70649 retry.go:31] will retry after 5.429623003s: waiting for domain to come up
	I0401 21:07:19.747355   68904 addons.go:514] duration metric: took 1.377747901s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:07:19.754995   68904 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-269490" context rescaled to 1 replicas
	I0401 21:07:21.254062   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:23.254170   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:20.967052   72096 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:20.967100   72096 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 21:07:20.967109   72096 cache.go:56] Caching tarball of preloaded images
	I0401 21:07:20.967208   72096 preload.go:172] Found /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 21:07:20.967221   72096 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 21:07:20.967324   72096 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json ...
	I0401 21:07:20.967350   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json: {Name:mkabbd5fa26c3d0a0e3ad8206cce24911ddf4ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:20.967473   72096 start.go:360] acquireMachinesLock for custom-flannel-269490: {Name:mk0a84ef580ee5c540e424c8d0c10ea2dd8b59a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 21:07:27.036122   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.036704   70627 main.go:141] libmachine: (kindnet-269490) found domain IP: 192.168.72.200
	I0401 21:07:27.036728   70627 main.go:141] libmachine: (kindnet-269490) reserving static IP address...
	I0401 21:07:27.036741   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has current primary IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.037062   70627 main.go:141] libmachine: (kindnet-269490) DBG | unable to find host DHCP lease matching {name: "kindnet-269490", mac: "52:54:00:a7:37:c0", ip: "192.168.72.200"} in network mk-kindnet-269490
	I0401 21:07:27.112813   70627 main.go:141] libmachine: (kindnet-269490) DBG | Getting to WaitForSSH function...
	I0401 21:07:27.112843   70627 main.go:141] libmachine: (kindnet-269490) reserved static IP address 192.168.72.200 for domain kindnet-269490
	I0401 21:07:27.112872   70627 main.go:141] libmachine: (kindnet-269490) waiting for SSH...
	I0401 21:07:27.115323   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.115796   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.115923   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.115950   70627 main.go:141] libmachine: (kindnet-269490) DBG | Using SSH client type: external
	I0401 21:07:27.115972   70627 main.go:141] libmachine: (kindnet-269490) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa (-rw-------)
	I0401 21:07:27.115994   70627 main.go:141] libmachine: (kindnet-269490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:07:27.116012   70627 main.go:141] libmachine: (kindnet-269490) DBG | About to run SSH command:
	I0401 21:07:27.116025   70627 main.go:141] libmachine: (kindnet-269490) DBG | exit 0
	I0401 21:07:28.675412   72096 start.go:364] duration metric: took 7.707851568s to acquireMachinesLock for "custom-flannel-269490"
	I0401 21:07:28.675471   72096 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:07:28.675590   72096 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 21:07:25.472985   68904 node_ready.go:53] node "calico-269490" has status "Ready":"False"
	I0401 21:07:27.253847   68904 node_ready.go:49] node "calico-269490" has status "Ready":"True"
	I0401 21:07:27.253864   68904 node_ready.go:38] duration metric: took 8.003199629s for node "calico-269490" to be "Ready" ...
	I0401 21:07:27.253872   68904 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:07:27.257050   68904 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:27.242376   70627 main.go:141] libmachine: (kindnet-269490) DBG | SSH cmd err, output: <nil>: 
	I0401 21:07:27.242647   70627 main.go:141] libmachine: (kindnet-269490) KVM machine creation complete
	I0401 21:07:27.242954   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetConfigRaw
	I0401 21:07:27.243418   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:27.243604   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:27.243762   70627 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 21:07:27.243775   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:27.245022   70627 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 21:07:27.245035   70627 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 21:07:27.245039   70627 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 21:07:27.245044   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.247141   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.247552   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.247576   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.247767   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.247943   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.248079   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.248204   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.248336   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.248568   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.248579   70627 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 21:07:27.345624   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:27.345651   70627 main.go:141] libmachine: Detecting the provisioner...
	I0401 21:07:27.345668   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.348762   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.349156   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.349177   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.349442   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.349668   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.349845   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.349977   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.350143   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.350384   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.350397   70627 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 21:07:27.455197   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 21:07:27.455275   70627 main.go:141] libmachine: found compatible host: buildroot
	I0401 21:07:27.455286   70627 main.go:141] libmachine: Provisioning with buildroot...
	I0401 21:07:27.455296   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.455573   70627 buildroot.go:166] provisioning hostname "kindnet-269490"
	I0401 21:07:27.455600   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.455807   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.458178   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.458482   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.458501   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.458727   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.458935   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.459090   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.459252   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.459383   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.459600   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.459612   70627 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-269490 && echo "kindnet-269490" | sudo tee /etc/hostname
	I0401 21:07:27.580784   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-269490
	
	I0401 21:07:27.580810   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.583963   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.584471   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.584501   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.584766   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:27.584991   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.585193   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:27.585384   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:27.585564   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:27.585756   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:27.585773   70627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-269490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-269490/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-269490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:07:27.700731   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:27.700756   70627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:07:27.700776   70627 buildroot.go:174] setting up certificates
	I0401 21:07:27.700789   70627 provision.go:84] configureAuth start
	I0401 21:07:27.700807   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetMachineName
	I0401 21:07:27.701088   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:27.703973   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.704286   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.704299   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.704491   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:27.706703   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.707051   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:27.707076   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:27.707203   70627 provision.go:143] copyHostCerts
	I0401 21:07:27.707255   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:07:27.707265   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:07:27.707328   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:07:27.707422   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:07:27.707429   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:07:27.707453   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:07:27.707515   70627 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:07:27.707522   70627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:07:27.707542   70627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:07:27.707603   70627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.kindnet-269490 san=[127.0.0.1 192.168.72.200 kindnet-269490 localhost minikube]
	I0401 21:07:28.041214   70627 provision.go:177] copyRemoteCerts
	I0401 21:07:28.041272   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:07:28.041293   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.044440   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.044786   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.044818   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.044953   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.045179   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.045341   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.045494   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.125273   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:07:28.152183   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0401 21:07:28.177383   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 21:07:28.201496   70627 provision.go:87] duration metric: took 500.692247ms to configureAuth
	I0401 21:07:28.201523   70627 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:07:28.201720   70627 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:28.201828   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.204278   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.204623   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.204647   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.204776   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.204980   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.205160   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.205299   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.205448   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:28.205669   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:28.205689   70627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:07:28.439140   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:07:28.439165   70627 main.go:141] libmachine: Checking connection to Docker...
	I0401 21:07:28.439173   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetURL
	I0401 21:07:28.440485   70627 main.go:141] libmachine: (kindnet-269490) DBG | using libvirt version 6000000
	I0401 21:07:28.442490   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.442845   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.442873   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.443006   70627 main.go:141] libmachine: Docker is up and running!
	I0401 21:07:28.443020   70627 main.go:141] libmachine: Reticulating splines...
	I0401 21:07:28.443027   70627 client.go:171] duration metric: took 26.224912939s to LocalClient.Create
	I0401 21:07:28.443053   70627 start.go:167] duration metric: took 26.224971636s to libmachine.API.Create "kindnet-269490"
	I0401 21:07:28.443076   70627 start.go:293] postStartSetup for "kindnet-269490" (driver="kvm2")
	I0401 21:07:28.443090   70627 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:07:28.443111   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.443340   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:07:28.443361   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.445496   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.445781   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.445819   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.445938   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.446110   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.446250   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.446380   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.527257   70627 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:07:28.531876   70627 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:07:28.531913   70627 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:07:28.531976   70627 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:07:28.532079   70627 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:07:28.532200   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:07:28.542758   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:28.567116   70627 start.go:296] duration metric: took 124.023387ms for postStartSetup
	I0401 21:07:28.567157   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetConfigRaw
	I0401 21:07:28.567744   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:28.570513   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.570890   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.570925   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.571188   70627 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/config.json ...
	I0401 21:07:28.571352   70627 start.go:128] duration metric: took 26.372666304s to createHost
	I0401 21:07:28.571372   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.573625   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.573965   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.573996   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.574106   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.574359   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.574499   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.574645   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.574805   70627 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:28.574999   70627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0401 21:07:28.575009   70627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:07:28.675218   70627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541648.630648618
	
	I0401 21:07:28.675244   70627 fix.go:216] guest clock: 1743541648.630648618
	I0401 21:07:28.675251   70627 fix.go:229] Guest: 2025-04-01 21:07:28.630648618 +0000 UTC Remote: 2025-04-01 21:07:28.571362914 +0000 UTC m=+26.497421115 (delta=59.285704ms)
	I0401 21:07:28.675268   70627 fix.go:200] guest clock delta is within tolerance: 59.285704ms
	I0401 21:07:28.675273   70627 start.go:83] releasing machines lock for "kindnet-269490", held for 26.476652376s
	I0401 21:07:28.675294   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.675584   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:28.678529   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.678972   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.679003   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.679129   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679598   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679812   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:28.679913   70627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:07:28.679970   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.680010   70627 ssh_runner.go:195] Run: cat /version.json
	I0401 21:07:28.680030   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:28.682720   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683101   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.683138   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683163   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683249   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.683417   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.683501   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:28.683531   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:28.683603   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.683739   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:28.683788   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.683896   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:28.684046   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:28.684172   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:28.769030   70627 ssh_runner.go:195] Run: systemctl --version
	I0401 21:07:28.791882   70627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:07:28.961201   70627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:07:28.969654   70627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:07:28.969728   70627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:07:28.986375   70627 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:07:28.986411   70627 start.go:495] detecting cgroup driver to use...
	I0401 21:07:28.986468   70627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:07:29.003118   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:07:29.017954   70627 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:07:29.018024   70627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:07:29.039725   70627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:07:29.056555   70627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:07:29.182669   70627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:07:29.336854   70627 docker.go:233] disabling docker service ...
	I0401 21:07:29.336911   70627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:07:29.354124   70627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:07:29.368340   70627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:07:29.535858   70627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:07:29.694425   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:07:29.713503   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:07:29.735749   70627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 21:07:29.735818   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.747810   70627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:07:29.747881   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.759913   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.777285   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.793765   70627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:07:29.806511   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.821740   70627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.845322   70627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:29.860990   70627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:07:29.874670   70627 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:07:29.874736   70627 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:07:29.893635   70627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:07:29.908790   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:30.038485   70627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:07:30.156804   70627 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:07:30.156877   70627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:07:30.163177   70627 start.go:563] Will wait 60s for crictl version
	I0401 21:07:30.163270   70627 ssh_runner.go:195] Run: which crictl
	I0401 21:07:30.167977   70627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:07:30.229882   70627 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:07:30.229963   70627 ssh_runner.go:195] Run: crio --version
	I0401 21:07:30.269347   70627 ssh_runner.go:195] Run: crio --version
	I0401 21:07:30.302624   70627 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 21:07:28.677559   72096 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0401 21:07:28.677751   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:28.677822   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:28.694049   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0401 21:07:28.694546   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:28.695167   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:07:28.695195   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:28.695565   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:28.695779   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:28.695920   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:28.696100   72096 start.go:159] libmachine.API.Create for "custom-flannel-269490" (driver="kvm2")
	I0401 21:07:28.696127   72096 client.go:168] LocalClient.Create starting
	I0401 21:07:28.696164   72096 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem
	I0401 21:07:28.696199   72096 main.go:141] libmachine: Decoding PEM data...
	I0401 21:07:28.696217   72096 main.go:141] libmachine: Parsing certificate...
	I0401 21:07:28.696268   72096 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem
	I0401 21:07:28.696301   72096 main.go:141] libmachine: Decoding PEM data...
	I0401 21:07:28.696318   72096 main.go:141] libmachine: Parsing certificate...
	I0401 21:07:28.696344   72096 main.go:141] libmachine: Running pre-create checks...
	I0401 21:07:28.696357   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .PreCreateCheck
	I0401 21:07:28.696663   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:28.697088   72096 main.go:141] libmachine: Creating machine...
	I0401 21:07:28.697104   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Create
	I0401 21:07:28.697278   72096 main.go:141] libmachine: (custom-flannel-269490) creating KVM machine...
	I0401 21:07:28.697294   72096 main.go:141] libmachine: (custom-flannel-269490) creating network...
	I0401 21:07:28.698499   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found existing default KVM network
	I0401 21:07:28.699714   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:28.699559   72184 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201380}
	I0401 21:07:28.699734   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | created network xml: 
	I0401 21:07:28.699747   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | <network>
	I0401 21:07:28.699756   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <name>mk-custom-flannel-269490</name>
	I0401 21:07:28.699772   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <dns enable='no'/>
	I0401 21:07:28.699783   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   
	I0401 21:07:28.699791   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 21:07:28.699801   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |     <dhcp>
	I0401 21:07:28.699814   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 21:07:28.699824   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |     </dhcp>
	I0401 21:07:28.699834   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   </ip>
	I0401 21:07:28.699842   72096 main.go:141] libmachine: (custom-flannel-269490) DBG |   
	I0401 21:07:28.699856   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | </network>
	I0401 21:07:28.699866   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | 
	I0401 21:07:28.705387   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | trying to create private KVM network mk-custom-flannel-269490 192.168.39.0/24...
	I0401 21:07:28.781748   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | private KVM network mk-custom-flannel-269490 192.168.39.0/24 created
	I0401 21:07:28.781785   72096 main.go:141] libmachine: (custom-flannel-269490) setting up store path in /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 ...
	I0401 21:07:28.781803   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:28.781711   72184 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:28.781825   72096 main.go:141] libmachine: (custom-flannel-269490) building disk image from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 21:07:28.781872   72096 main.go:141] libmachine: (custom-flannel-269490) Downloading /home/jenkins/minikube-integration/20506-9129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0401 21:07:29.058600   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.058491   72184 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa...
	I0401 21:07:29.284720   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.284560   72184 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/custom-flannel-269490.rawdisk...
	I0401 21:07:29.284762   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Writing magic tar header
	I0401 21:07:29.284781   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Writing SSH key tar header
	I0401 21:07:29.284790   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:29.284674   72184 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 ...
	I0401 21:07:29.284799   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490
	I0401 21:07:29.284806   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube/machines
	I0401 21:07:29.284819   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 21:07:29.284829   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20506-9129
	I0401 21:07:29.284854   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0401 21:07:29.284877   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490 (perms=drwx------)
	I0401 21:07:29.284897   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home/jenkins
	I0401 21:07:29.284911   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | checking permissions on dir: /home
	I0401 21:07:29.284916   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | skipping /home - not owner
	I0401 21:07:29.284927   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube/machines (perms=drwxr-xr-x)
	I0401 21:07:29.284936   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129/.minikube (perms=drwxr-xr-x)
	I0401 21:07:29.284947   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration/20506-9129 (perms=drwxrwxr-x)
	I0401 21:07:29.284953   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 21:07:29.284961   72096 main.go:141] libmachine: (custom-flannel-269490) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 21:07:29.284970   72096 main.go:141] libmachine: (custom-flannel-269490) creating domain...
	I0401 21:07:29.285984   72096 main.go:141] libmachine: (custom-flannel-269490) define libvirt domain using xml: 
	I0401 21:07:29.286030   72096 main.go:141] libmachine: (custom-flannel-269490) <domain type='kvm'>
	I0401 21:07:29.286042   72096 main.go:141] libmachine: (custom-flannel-269490)   <name>custom-flannel-269490</name>
	I0401 21:07:29.286047   72096 main.go:141] libmachine: (custom-flannel-269490)   <memory unit='MiB'>3072</memory>
	I0401 21:07:29.286087   72096 main.go:141] libmachine: (custom-flannel-269490)   <vcpu>2</vcpu>
	I0401 21:07:29.286134   72096 main.go:141] libmachine: (custom-flannel-269490)   <features>
	I0401 21:07:29.286149   72096 main.go:141] libmachine: (custom-flannel-269490)     <acpi/>
	I0401 21:07:29.286155   72096 main.go:141] libmachine: (custom-flannel-269490)     <apic/>
	I0401 21:07:29.286176   72096 main.go:141] libmachine: (custom-flannel-269490)     <pae/>
	I0401 21:07:29.286193   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286204   72096 main.go:141] libmachine: (custom-flannel-269490)   </features>
	I0401 21:07:29.286232   72096 main.go:141] libmachine: (custom-flannel-269490)   <cpu mode='host-passthrough'>
	I0401 21:07:29.286253   72096 main.go:141] libmachine: (custom-flannel-269490)   
	I0401 21:07:29.286262   72096 main.go:141] libmachine: (custom-flannel-269490)   </cpu>
	I0401 21:07:29.286271   72096 main.go:141] libmachine: (custom-flannel-269490)   <os>
	I0401 21:07:29.286281   72096 main.go:141] libmachine: (custom-flannel-269490)     <type>hvm</type>
	I0401 21:07:29.286291   72096 main.go:141] libmachine: (custom-flannel-269490)     <boot dev='cdrom'/>
	I0401 21:07:29.286299   72096 main.go:141] libmachine: (custom-flannel-269490)     <boot dev='hd'/>
	I0401 21:07:29.286309   72096 main.go:141] libmachine: (custom-flannel-269490)     <bootmenu enable='no'/>
	I0401 21:07:29.286318   72096 main.go:141] libmachine: (custom-flannel-269490)   </os>
	I0401 21:07:29.286327   72096 main.go:141] libmachine: (custom-flannel-269490)   <devices>
	I0401 21:07:29.286336   72096 main.go:141] libmachine: (custom-flannel-269490)     <disk type='file' device='cdrom'>
	I0401 21:07:29.286354   72096 main.go:141] libmachine: (custom-flannel-269490)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/boot2docker.iso'/>
	I0401 21:07:29.286364   72096 main.go:141] libmachine: (custom-flannel-269490)       <target dev='hdc' bus='scsi'/>
	I0401 21:07:29.286374   72096 main.go:141] libmachine: (custom-flannel-269490)       <readonly/>
	I0401 21:07:29.286383   72096 main.go:141] libmachine: (custom-flannel-269490)     </disk>
	I0401 21:07:29.286393   72096 main.go:141] libmachine: (custom-flannel-269490)     <disk type='file' device='disk'>
	I0401 21:07:29.286403   72096 main.go:141] libmachine: (custom-flannel-269490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 21:07:29.286417   72096 main.go:141] libmachine: (custom-flannel-269490)       <source file='/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/custom-flannel-269490.rawdisk'/>
	I0401 21:07:29.286425   72096 main.go:141] libmachine: (custom-flannel-269490)       <target dev='hda' bus='virtio'/>
	I0401 21:07:29.286439   72096 main.go:141] libmachine: (custom-flannel-269490)     </disk>
	I0401 21:07:29.286454   72096 main.go:141] libmachine: (custom-flannel-269490)     <interface type='network'>
	I0401 21:07:29.286466   72096 main.go:141] libmachine: (custom-flannel-269490)       <source network='mk-custom-flannel-269490'/>
	I0401 21:07:29.286478   72096 main.go:141] libmachine: (custom-flannel-269490)       <model type='virtio'/>
	I0401 21:07:29.286488   72096 main.go:141] libmachine: (custom-flannel-269490)     </interface>
	I0401 21:07:29.286497   72096 main.go:141] libmachine: (custom-flannel-269490)     <interface type='network'>
	I0401 21:07:29.286504   72096 main.go:141] libmachine: (custom-flannel-269490)       <source network='default'/>
	I0401 21:07:29.286528   72096 main.go:141] libmachine: (custom-flannel-269490)       <model type='virtio'/>
	I0401 21:07:29.286549   72096 main.go:141] libmachine: (custom-flannel-269490)     </interface>
	I0401 21:07:29.286563   72096 main.go:141] libmachine: (custom-flannel-269490)     <serial type='pty'>
	I0401 21:07:29.286573   72096 main.go:141] libmachine: (custom-flannel-269490)       <target port='0'/>
	I0401 21:07:29.286581   72096 main.go:141] libmachine: (custom-flannel-269490)     </serial>
	I0401 21:07:29.286603   72096 main.go:141] libmachine: (custom-flannel-269490)     <console type='pty'>
	I0401 21:07:29.286615   72096 main.go:141] libmachine: (custom-flannel-269490)       <target type='serial' port='0'/>
	I0401 21:07:29.286628   72096 main.go:141] libmachine: (custom-flannel-269490)     </console>
	I0401 21:07:29.286640   72096 main.go:141] libmachine: (custom-flannel-269490)     <rng model='virtio'>
	I0401 21:07:29.286652   72096 main.go:141] libmachine: (custom-flannel-269490)       <backend model='random'>/dev/random</backend>
	I0401 21:07:29.286663   72096 main.go:141] libmachine: (custom-flannel-269490)     </rng>
	I0401 21:07:29.286669   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286680   72096 main.go:141] libmachine: (custom-flannel-269490)     
	I0401 21:07:29.286686   72096 main.go:141] libmachine: (custom-flannel-269490)   </devices>
	I0401 21:07:29.286706   72096 main.go:141] libmachine: (custom-flannel-269490) </domain>
	I0401 21:07:29.286723   72096 main.go:141] libmachine: (custom-flannel-269490) 
	I0401 21:07:29.290865   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:8b:9d:ef in network default
	I0401 21:07:29.291399   72096 main.go:141] libmachine: (custom-flannel-269490) starting domain...
	I0401 21:07:29.291422   72096 main.go:141] libmachine: (custom-flannel-269490) ensuring networks are active...
	I0401 21:07:29.291433   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:29.291982   72096 main.go:141] libmachine: (custom-flannel-269490) Ensuring network default is active
	I0401 21:07:29.292311   72096 main.go:141] libmachine: (custom-flannel-269490) Ensuring network mk-custom-flannel-269490 is active
	I0401 21:07:29.292850   72096 main.go:141] libmachine: (custom-flannel-269490) getting domain XML...
	I0401 21:07:29.293579   72096 main.go:141] libmachine: (custom-flannel-269490) creating domain...
	I0401 21:07:30.303928   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetIP
	I0401 21:07:30.307187   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:30.307572   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:30.307599   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:30.307851   70627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 21:07:30.312717   70627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:30.329656   70627 kubeadm.go:883] updating cluster {Name:kindnet-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:07:30.329769   70627 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:30.329840   70627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:30.373808   70627 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 21:07:30.373892   70627 ssh_runner.go:195] Run: which lz4
	I0401 21:07:30.379933   70627 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:07:30.385901   70627 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:07:30.385939   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 21:07:32.049587   70627 crio.go:462] duration metric: took 1.669696993s to copy over tarball
	I0401 21:07:32.049659   70627 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:07:29.263832   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:31.264708   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:33.769708   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:30.943467   72096 main.go:141] libmachine: (custom-flannel-269490) waiting for IP...
	I0401 21:07:30.944501   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:30.945048   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:30.945154   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:30.945061   72184 retry.go:31] will retry after 194.088864ms: waiting for domain to come up
	I0401 21:07:31.141228   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.142003   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.142032   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.141987   72184 retry.go:31] will retry after 322.526555ms: waiting for domain to come up
	I0401 21:07:31.466493   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.467103   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.467136   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.467085   72184 retry.go:31] will retry after 362.158292ms: waiting for domain to come up
	I0401 21:07:31.830645   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:31.831272   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:31.831294   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:31.831181   72184 retry.go:31] will retry after 507.010873ms: waiting for domain to come up
	I0401 21:07:32.340049   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:32.340688   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:32.340721   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:32.340672   72184 retry.go:31] will retry after 549.764239ms: waiting for domain to come up
	I0401 21:07:32.892498   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:32.893048   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:32.893109   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:32.893038   72184 retry.go:31] will retry after 893.566953ms: waiting for domain to come up
	I0401 21:07:33.788648   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:33.789231   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:33.789313   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:33.789217   72184 retry.go:31] will retry after 1.073160889s: waiting for domain to come up
	I0401 21:07:34.863948   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:34.864715   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:34.864744   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:34.864686   72184 retry.go:31] will retry after 1.137676024s: waiting for domain to come up
	I0401 21:07:34.855116   70627 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.805424084s)
	I0401 21:07:34.855163   70627 crio.go:469] duration metric: took 2.805546758s to extract the tarball
	I0401 21:07:34.855174   70627 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:07:34.908880   70627 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:34.967377   70627 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 21:07:34.967406   70627 cache_images.go:84] Images are preloaded, skipping loading
	I0401 21:07:34.967416   70627 kubeadm.go:934] updating node { 192.168.72.200 8443 v1.32.2 crio true true} ...
	I0401 21:07:34.967548   70627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-269490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0401 21:07:34.967631   70627 ssh_runner.go:195] Run: crio config
	I0401 21:07:35.020670   70627 cni.go:84] Creating CNI manager for "kindnet"
	I0401 21:07:35.020696   70627 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:07:35.020718   70627 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-269490 NodeName:kindnet-269490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 21:07:35.020839   70627 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-269490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
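The kubeadm.yaml above is generated by filling a template with the option values logged at kubeadm.go:189 and is then copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that rendering step in Go; the kubeadmParams struct and the trimmed template are illustrative stand-ins, not minikube's actual types or template.

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed-down stand-in for the option values logged above; the real
	// minikube struct carries many more fields.
	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
		CRISocket         string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.72.200",
			BindPort:          8443,
			NodeName:          "kindnet-269490",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.32.2",
			CRISocket:         "unix:///var/run/crio/crio.sock",
		}
		// Render to stdout; minikube writes the rendered result to the node
		// as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below).
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}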
	
	I0401 21:07:35.020907   70627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 21:07:35.030866   70627 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:07:35.030991   70627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:07:35.040113   70627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0401 21:07:35.058011   70627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:07:35.078574   70627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0401 21:07:35.098427   70627 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0401 21:07:35.103690   70627 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
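The /etc/hosts step above first greps for an existing control-plane.minikube.internal record and, when the grep fails, rewrites the file with the stale record filtered out and a fresh one appended. A rough local equivalent in Go; ensureHostsEntry is a hypothetical helper, and minikube actually performs this remotely through its ssh_runner with a temp file and sudo cp.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any previous line for the host name and appends
	// a fresh "ip<TAB>name" record, mirroring the grep/echo pipeline in the log.
	// It must run as root to write /etc/hosts.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale record for the same host name
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.72.200", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}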
	I0401 21:07:35.120443   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:35.277665   70627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:07:35.301275   70627 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490 for IP: 192.168.72.200
	I0401 21:07:35.301301   70627 certs.go:194] generating shared ca certs ...
	I0401 21:07:35.301323   70627 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:35.301486   70627 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:07:35.301544   70627 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:07:35.301556   70627 certs.go:256] generating profile certs ...
	I0401 21:07:35.301622   70627 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key
	I0401 21:07:35.301645   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt with IP's: []
	I0401 21:07:36.000768   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt ...
	I0401 21:07:36.000802   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.crt: {Name:mk04a99f27c2f056a29fa36354c47c3222966cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.001003   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key ...
	I0401 21:07:36.001020   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/client.key: {Name:mk5444fb90b1ff0a0c80a111598fb1ccc67e25fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.001135   70627 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5
	I0401 21:07:36.001155   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0401 21:07:36.090552   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 ...
	I0401 21:07:36.090588   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5: {Name:mk69f7dd622b7c419828c04f6ea380483c101940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.090767   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5 ...
	I0401 21:07:36.090785   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5: {Name:mkeaf32ff9453aef850a761332e7f9bb6dfc5cad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.090885   70627 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt.7dbfb8d5 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt
	I0401 21:07:36.090977   70627 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key.7dbfb8d5 -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key
	I0401 21:07:36.091055   70627 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key
	I0401 21:07:36.091075   70627 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt with IP's: []
	I0401 21:07:36.356603   70627 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt ...
	I0401 21:07:36.356633   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt: {Name:mk053c71ff066a03a7f917f8347cef707651c156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:36.356813   70627 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key ...
	I0401 21:07:36.356831   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key: {Name:mk7c401e3c137a1d374bd407e8454dc99cff1e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
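The crypto.go lines above generate the per-profile certificates: each leaf key pair is created locally and signed by the shared minikube CA, with the API-server certificate carrying the IP SANs listed in the log. A compressed sketch of that pattern using crypto/x509; unlike minikube, which loads .minikube/ca.key from disk, this sketch signs with a throwaway in-memory CA.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key and template; minikube instead reuses the existing
		// "minikubeCA" key pair under .minikube/.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}

		// Leaf certificate for the API server, with the SANs seen in the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.200"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		// Emit the signed certificate as PEM, as the .crt files above are written.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}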
	I0401 21:07:36.357017   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:07:36.357068   70627 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:07:36.357083   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:07:36.357115   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:07:36.357170   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:07:36.357210   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:07:36.357269   70627 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:36.357829   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:07:36.391336   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:07:36.425083   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:07:36.457892   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:07:36.492019   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 21:07:36.522365   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 21:07:36.547296   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:07:36.572536   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/kindnet-269490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:07:36.598460   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:07:36.628401   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:07:36.658521   70627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:07:36.689061   70627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 21:07:36.714997   70627 ssh_runner.go:195] Run: openssl version
	I0401 21:07:36.723421   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:07:36.739419   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.745825   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.745888   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:07:36.754721   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:07:36.771512   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:07:36.789799   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.796727   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.796800   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:07:36.810295   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:07:36.824556   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:07:36.839972   70627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.847132   70627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.847202   70627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:07:36.854129   70627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
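The openssl/ln pairs above install each extra certificate into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how the guest's trust store locates CAs. The same two commands driven from Go; installTrustedCert is a hypothetical helper, not a minikube function.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installTrustedCert mirrors the log: compute the subject hash with openssl,
	// then symlink <hash>.0 in /etc/ssl/certs to the shared certificate.
	func installTrustedCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// sudo is kept because the runner executes on the guest VM as a non-root user.
		return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
	}

	func main() {
		if err := installTrustedCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}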
	I0401 21:07:36.868264   70627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:07:36.873005   70627 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 21:07:36.873058   70627 kubeadm.go:392] StartCluster: {Name:kindnet-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:07:36.873147   70627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:07:36.873204   70627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:07:36.917357   70627 cri.go:89] found id: ""
	I0401 21:07:36.917434   70627 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:07:36.928432   70627 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:07:36.939322   70627 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:07:36.949948   70627 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:07:36.949975   70627 kubeadm.go:157] found existing configuration files:
	
	I0401 21:07:36.950027   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:07:36.959903   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:07:36.959979   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:07:36.970704   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:07:36.980434   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:07:36.980531   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:07:36.994176   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:07:37.007180   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:07:37.007238   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:07:37.017875   70627 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:07:37.028242   70627 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:07:37.028303   70627 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
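The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm regenerates it. Sketched as a local loop (minikube runs the equivalent commands over SSH):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong server URL: remove it so kubeadm writes a fresh one.
				os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
			}
		}
	}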
	I0401 21:07:37.038869   70627 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:07:37.095127   70627 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 21:07:37.095194   70627 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:07:37.220077   70627 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:07:37.220198   70627 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:07:37.220346   70627 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:07:37.232593   70627 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:07:38.460012   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:37.363938   70627 out.go:235]   - Generating certificates and keys ...
	I0401 21:07:37.364091   70627 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:07:37.364186   70627 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:07:37.410466   70627 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 21:07:37.746651   70627 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 21:07:38.065662   70627 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 21:07:38.284383   70627 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 21:07:38.672088   70627 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 21:07:38.672441   70627 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-269490 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0401 21:07:39.029897   70627 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 21:07:39.030235   70627 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-269490 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0401 21:07:39.197549   70627 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 21:07:39.291766   70627 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 21:07:39.461667   70627 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 21:07:39.461915   70627 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:07:39.598656   70627 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:07:39.836507   70627 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 21:07:40.087046   70627 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:07:40.167057   70627 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:07:40.493658   70627 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:07:40.494572   70627 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:07:40.497003   70627 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:07:36.004129   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:36.004736   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:36.004770   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:36.004691   72184 retry.go:31] will retry after 1.398747795s: waiting for domain to come up
	I0401 21:07:37.404982   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:37.405521   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:37.405562   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:37.405494   72184 retry.go:31] will retry after 1.806073182s: waiting for domain to come up
	I0401 21:07:39.213342   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:39.213908   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:39.213933   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:39.213880   72184 retry.go:31] will retry after 2.187010311s: waiting for domain to come up
	I0401 21:07:40.498949   70627 out.go:235]   - Booting up control plane ...
	I0401 21:07:40.499089   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:07:40.500823   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:07:40.502736   70627 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:07:40.520810   70627 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:07:40.529515   70627 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:07:40.529647   70627 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:07:40.738046   70627 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 21:07:40.738253   70627 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 21:07:41.738936   70627 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001264949s
	I0401 21:07:41.739064   70627 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 21:07:40.766840   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:42.802475   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:41.402690   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:41.403302   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:41.403328   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:41.403246   72184 retry.go:31] will retry after 2.956512585s: waiting for domain to come up
	I0401 21:07:44.361436   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:44.362043   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:44.362067   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:44.362014   72184 retry.go:31] will retry after 3.563399146s: waiting for domain to come up
	I0401 21:07:47.241056   70627 kubeadm.go:310] [api-check] The API server is healthy after 5.503493954s
	I0401 21:07:47.253704   70627 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 21:07:47.270641   70627 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 21:07:47.300541   70627 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 21:07:47.300816   70627 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-269490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 21:07:47.320561   70627 kubeadm.go:310] [bootstrap-token] Using token: xu4lw3.orewvhbjfn5oas79
	I0401 21:07:47.322197   70627 out.go:235]   - Configuring RBAC rules ...
	I0401 21:07:47.322340   70627 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 21:07:47.327157   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 21:07:47.334751   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 21:07:47.338556   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 21:07:47.342546   70627 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 21:07:47.349586   70627 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 21:07:47.650929   70627 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 21:07:48.074376   70627 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 21:07:48.652551   70627 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 21:07:48.652572   70627 kubeadm.go:310] 
	I0401 21:07:48.652631   70627 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 21:07:48.652637   70627 kubeadm.go:310] 
	I0401 21:07:48.652746   70627 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 21:07:48.652757   70627 kubeadm.go:310] 
	I0401 21:07:48.652792   70627 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 21:07:48.652887   70627 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 21:07:48.652979   70627 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 21:07:48.652989   70627 kubeadm.go:310] 
	I0401 21:07:48.653048   70627 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 21:07:48.653063   70627 kubeadm.go:310] 
	I0401 21:07:48.653137   70627 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 21:07:48.653146   70627 kubeadm.go:310] 
	I0401 21:07:48.653225   70627 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 21:07:48.653312   70627 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 21:07:48.653407   70627 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 21:07:48.653421   70627 kubeadm.go:310] 
	I0401 21:07:48.653547   70627 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 21:07:48.653624   70627 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 21:07:48.653630   70627 kubeadm.go:310] 
	I0401 21:07:48.653714   70627 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xu4lw3.orewvhbjfn5oas79 \
	I0401 21:07:48.653861   70627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 21:07:48.653901   70627 kubeadm.go:310] 	--control-plane 
	I0401 21:07:48.653911   70627 kubeadm.go:310] 
	I0401 21:07:48.653996   70627 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 21:07:48.654008   70627 kubeadm.go:310] 
	I0401 21:07:48.654074   70627 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xu4lw3.orewvhbjfn5oas79 \
	I0401 21:07:48.654207   70627 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 21:07:48.654854   70627 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
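The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the CA certificate's Subject Public Key Info. A small sketch that recomputes the same value from the ca.crt path used in this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA certificate that kubeadm init used above.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}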
	I0401 21:07:48.654879   70627 cni.go:84] Creating CNI manager for "kindnet"
	I0401 21:07:48.656486   70627 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 21:07:45.262936   68904 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:46.762301   68904 pod_ready.go:93] pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:46.762325   68904 pod_ready.go:82] duration metric: took 19.505245826s for pod "calico-kube-controllers-77969b7d87-64swg" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:46.762339   68904 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-8lpnw" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:48.770589   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:47.927525   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:47.928071   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find current IP address of domain custom-flannel-269490 in network mk-custom-flannel-269490
	I0401 21:07:47.928097   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | I0401 21:07:47.928026   72184 retry.go:31] will retry after 4.622496999s: waiting for domain to come up
	I0401 21:07:48.657874   70627 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 21:07:48.663855   70627 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 21:07:48.663882   70627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 21:07:48.684916   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 21:07:48.983530   70627 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 21:07:48.983634   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:48.983651   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-269490 minikube.k8s.io/updated_at=2025_04_01T21_07_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=kindnet-269490 minikube.k8s.io/primary=true
	I0401 21:07:49.169988   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:49.170019   70627 ops.go:34] apiserver oom_adj: -16
	I0401 21:07:49.670692   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:50.170288   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:50.670668   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.170790   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.670642   70627 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:07:51.775301   70627 kubeadm.go:1113] duration metric: took 2.791727937s to wait for elevateKubeSystemPrivileges
	I0401 21:07:51.775340   70627 kubeadm.go:394] duration metric: took 14.902284629s to StartCluster
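The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to exist after creating the minikube-rbac clusterrolebinding (the elevateKubeSystemPrivileges step). A sketch of that poll; the kubectl path and kubeconfig are taken from the log, while the loop itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds only once kube-controller-manager has created the
			// "default" ServiceAccount in the default namespace.
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}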
	I0401 21:07:51.775359   70627 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:51.775433   70627 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:07:51.776414   70627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:07:51.776667   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 21:07:51.776684   70627 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 21:07:51.776663   70627 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:07:51.776751   70627 addons.go:69] Setting storage-provisioner=true in profile "kindnet-269490"
	I0401 21:07:51.776767   70627 addons.go:238] Setting addon storage-provisioner=true in "kindnet-269490"
	I0401 21:07:51.776791   70627 host.go:66] Checking if "kindnet-269490" exists ...
	I0401 21:07:51.776801   70627 addons.go:69] Setting default-storageclass=true in profile "kindnet-269490"
	I0401 21:07:51.776821   70627 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-269490"
	I0401 21:07:51.776876   70627 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:51.777230   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.777253   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.777275   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.777285   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.779535   70627 out.go:177] * Verifying Kubernetes components...
	I0401 21:07:51.780894   70627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:51.792573   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I0401 21:07:51.792618   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0401 21:07:51.793016   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.793065   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.793480   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.793504   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.793657   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.793680   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.794003   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.794035   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.794177   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.794522   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.794562   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.797467   70627 addons.go:238] Setting addon default-storageclass=true in "kindnet-269490"
	I0401 21:07:51.797509   70627 host.go:66] Checking if "kindnet-269490" exists ...
	I0401 21:07:51.797754   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.797788   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.812436   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0401 21:07:51.812455   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0401 21:07:51.812907   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.812960   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.813461   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.813479   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.813561   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.813576   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.813844   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.813927   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.813972   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.814617   70627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:07:51.814659   70627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:07:51.815559   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:51.818041   70627 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:07:51.819387   70627 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:07:51.819404   70627 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 21:07:51.819419   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:51.822051   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.822524   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:51.822549   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.822659   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:51.822828   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:51.822959   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:51.823080   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:51.830521   70627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I0401 21:07:51.830922   70627 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:07:51.831277   70627 main.go:141] libmachine: Using API Version  1
	I0401 21:07:51.831300   70627 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:07:51.831604   70627 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:07:51.831734   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetState
	I0401 21:07:51.833172   70627 main.go:141] libmachine: (kindnet-269490) Calling .DriverName
	I0401 21:07:51.833423   70627 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 21:07:51.833437   70627 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 21:07:51.833452   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHHostname
	I0401 21:07:51.835920   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.836208   70627 main.go:141] libmachine: (kindnet-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:37:c0", ip: ""} in network mk-kindnet-269490: {Iface:virbr4 ExpiryTime:2025-04-01 22:07:19 +0000 UTC Type:0 Mac:52:54:00:a7:37:c0 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:kindnet-269490 Clientid:01:52:54:00:a7:37:c0}
	I0401 21:07:51.836233   70627 main.go:141] libmachine: (kindnet-269490) DBG | domain kindnet-269490 has defined IP address 192.168.72.200 and MAC address 52:54:00:a7:37:c0 in network mk-kindnet-269490
	I0401 21:07:51.836310   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHPort
	I0401 21:07:51.836491   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHKeyPath
	I0401 21:07:51.836611   70627 main.go:141] libmachine: (kindnet-269490) Calling .GetSSHUsername
	I0401 21:07:51.836740   70627 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/kindnet-269490/id_rsa Username:docker}
	I0401 21:07:51.962702   70627 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 21:07:51.987403   70627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:07:52.104000   70627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 21:07:52.189058   70627 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:07:52.363088   70627 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0401 21:07:52.364340   70627 node_ready.go:35] waiting up to 15m0s for node "kindnet-269490" to be "Ready" ...
	I0401 21:07:52.440087   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.440110   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.440411   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.440428   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.440442   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.440451   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.440672   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.440687   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.440718   70627 main.go:141] libmachine: (kindnet-269490) DBG | Closing plugin on server side
	I0401 21:07:52.497451   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.497484   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.497812   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.497831   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.884016   70627 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-269490" context rescaled to 1 replicas
	I0401 21:07:52.961084   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.961107   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.961382   70627 main.go:141] libmachine: (kindnet-269490) DBG | Closing plugin on server side
	I0401 21:07:52.961424   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.961437   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.961455   70627 main.go:141] libmachine: Making call to close driver server
	I0401 21:07:52.961466   70627 main.go:141] libmachine: (kindnet-269490) Calling .Close
	I0401 21:07:52.961684   70627 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:07:52.961700   70627 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:07:52.963918   70627 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 21:07:51.268101   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:53.269079   68904 pod_ready.go:103] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"False"
	I0401 21:07:53.768113   68904 pod_ready.go:93] pod "calico-node-8lpnw" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.768135   68904 pod_ready.go:82] duration metric: took 7.005790357s for pod "calico-node-8lpnw" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.768143   68904 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.772369   68904 pod_ready.go:93] pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.772394   68904 pod_ready.go:82] duration metric: took 4.243794ms for pod "coredns-668d6bf9bc-mn944" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.772406   68904 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.777208   68904 pod_ready.go:93] pod "etcd-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.777228   68904 pod_ready.go:82] duration metric: took 4.815519ms for pod "etcd-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.777237   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.780965   68904 pod_ready.go:93] pod "kube-apiserver-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.780986   68904 pod_ready.go:82] duration metric: took 3.742662ms for pod "kube-apiserver-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.780997   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.785450   68904 pod_ready.go:93] pod "kube-controller-manager-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:53.785473   68904 pod_ready.go:82] duration metric: took 4.467871ms for pod "kube-controller-manager-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:53.785484   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-clkkm" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.166123   68904 pod_ready.go:93] pod "kube-proxy-clkkm" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:54.166149   68904 pod_ready.go:82] duration metric: took 380.656026ms for pod "kube-proxy-clkkm" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.166161   68904 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.567079   68904 pod_ready.go:93] pod "kube-scheduler-calico-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:07:54.567105   68904 pod_ready.go:82] duration metric: took 400.93599ms for pod "kube-scheduler-calico-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:07:54.567118   68904 pod_ready.go:39] duration metric: took 27.313232071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
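The pod_ready.go waits above keep re-reading each pod's Ready condition until it reports "True" or the 15m budget runs out. The same check expressed with kubectl's JSONPath output (a sketch; minikube itself uses client-go rather than shelling out, and waitPodReady is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls the pod's Ready condition until it is "True" or the
	// timeout expires, mirroring the pod_ready.go loop in the log.
	func waitPodReady(ns, name string, timeout time.Duration) error {
		jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
				"-o", jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		if err := waitPodReady("kube-system", "calico-node-8lpnw", 15*time.Minute); err != nil {
			fmt.Println(err)
		}
	}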
	I0401 21:07:54.567135   68904 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:07:54.567190   68904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:07:54.583839   68904 api_server.go:72] duration metric: took 36.214254974s to wait for apiserver process to appear ...
	I0401 21:07:54.583866   68904 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:07:54.583887   68904 api_server.go:253] Checking apiserver healthz at https://192.168.61.102:8443/healthz ...
	I0401 21:07:54.588495   68904 api_server.go:279] https://192.168.61.102:8443/healthz returned 200:
	ok
	I0401 21:07:54.589645   68904 api_server.go:141] control plane version: v1.32.2
	I0401 21:07:54.589671   68904 api_server.go:131] duration metric: took 5.795827ms to wait for apiserver health ...
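The api_server.go check above polls https://192.168.61.102:8443/healthz until it returns 200 with body "ok". A minimal poller for that endpoint; as a simplifying assumption it skips TLS verification instead of loading the cluster CA, which the real check does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the sketch self-contained; load ca.crt into
			// an x509.CertPool to reproduce the minikube check faithfully.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.61.102:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy")
	}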
	I0401 21:07:54.589681   68904 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:07:54.767449   68904 system_pods.go:59] 9 kube-system pods found
	I0401 21:07:54.767492   68904 system_pods.go:61] "calico-kube-controllers-77969b7d87-64swg" [34a618ff-c7cd-447e-9ef9-32357bcf9e42] Running
	I0401 21:07:54.767499   68904 system_pods.go:61] "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
	I0401 21:07:54.767503   68904 system_pods.go:61] "coredns-668d6bf9bc-mn944" [fb12f605-c79b-4cdf-92c3-2a3bf9319b9f] Running
	I0401 21:07:54.767507   68904 system_pods.go:61] "etcd-calico-269490" [60128f13-ff1b-43d3-9577-30032cbc1224] Running
	I0401 21:07:54.767510   68904 system_pods.go:61] "kube-apiserver-calico-269490" [7bc4e2df-17c3-4c16-8fc4-6cbd4d194757] Running
	I0401 21:07:54.767513   68904 system_pods.go:61] "kube-controller-manager-calico-269490" [bada65a1-db90-4fe8-b3da-f55647a2a5f5] Running
	I0401 21:07:54.767516   68904 system_pods.go:61] "kube-proxy-clkkm" [20def08e-d6ad-4685-91cf-658019584c13] Running
	I0401 21:07:54.767519   68904 system_pods.go:61] "kube-scheduler-calico-269490" [02f99ab0-d476-4e0a-b12b-b62d8fded21c] Running
	I0401 21:07:54.767522   68904 system_pods.go:61] "storage-provisioner" [dea0b01b-b565-4ea8-b740-28125b3c579c] Running
	I0401 21:07:54.767528   68904 system_pods.go:74] duration metric: took 177.841641ms to wait for pod list to return data ...
	I0401 21:07:54.767537   68904 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:07:54.967440   68904 default_sa.go:45] found service account: "default"
	I0401 21:07:54.967473   68904 default_sa.go:55] duration metric: took 199.928997ms for default service account to be created ...
	I0401 21:07:54.967485   68904 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:07:55.168431   68904 system_pods.go:86] 9 kube-system pods found
	I0401 21:07:55.168456   68904 system_pods.go:89] "calico-kube-controllers-77969b7d87-64swg" [34a618ff-c7cd-447e-9ef9-32357bcf9e42] Running
	I0401 21:07:55.168462   68904 system_pods.go:89] "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
	I0401 21:07:55.168466   68904 system_pods.go:89] "coredns-668d6bf9bc-mn944" [fb12f605-c79b-4cdf-92c3-2a3bf9319b9f] Running
	I0401 21:07:55.168469   68904 system_pods.go:89] "etcd-calico-269490" [60128f13-ff1b-43d3-9577-30032cbc1224] Running
	I0401 21:07:55.168472   68904 system_pods.go:89] "kube-apiserver-calico-269490" [7bc4e2df-17c3-4c16-8fc4-6cbd4d194757] Running
	I0401 21:07:55.168475   68904 system_pods.go:89] "kube-controller-manager-calico-269490" [bada65a1-db90-4fe8-b3da-f55647a2a5f5] Running
	I0401 21:07:55.168478   68904 system_pods.go:89] "kube-proxy-clkkm" [20def08e-d6ad-4685-91cf-658019584c13] Running
	I0401 21:07:55.168481   68904 system_pods.go:89] "kube-scheduler-calico-269490" [02f99ab0-d476-4e0a-b12b-b62d8fded21c] Running
	I0401 21:07:55.168484   68904 system_pods.go:89] "storage-provisioner" [dea0b01b-b565-4ea8-b740-28125b3c579c] Running
	I0401 21:07:55.168490   68904 system_pods.go:126] duration metric: took 200.999651ms to wait for k8s-apps to be running ...
	I0401 21:07:55.168499   68904 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:07:55.168548   68904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:07:55.186472   68904 system_svc.go:56] duration metric: took 17.963992ms WaitForService to wait for kubelet
	I0401 21:07:55.186500   68904 kubeadm.go:582] duration metric: took 36.816918566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:07:55.186519   68904 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:07:55.366862   68904 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:07:55.366898   68904 node_conditions.go:123] node cpu capacity is 2
	I0401 21:07:55.366915   68904 node_conditions.go:105] duration metric: took 180.387995ms to run NodePressure ...
	I0401 21:07:55.366931   68904 start.go:241] waiting for startup goroutines ...
	I0401 21:07:55.366942   68904 start.go:246] waiting for cluster config update ...
	I0401 21:07:55.366957   68904 start.go:255] writing updated cluster config ...
	I0401 21:07:55.367292   68904 ssh_runner.go:195] Run: rm -f paused
	I0401 21:07:55.418044   68904 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:07:55.419536   68904 out.go:177] * Done! kubectl is now configured to use "calico-269490" cluster and "default" namespace by default
	I0401 21:07:52.552419   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.553020   72096 main.go:141] libmachine: (custom-flannel-269490) found domain IP: 192.168.39.115
	I0401 21:07:52.553043   72096 main.go:141] libmachine: (custom-flannel-269490) reserving static IP address...
	I0401 21:07:52.553055   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has current primary IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.553551   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | unable to find host DHCP lease matching {name: "custom-flannel-269490", mac: "52:54:00:bc:3c:1b", ip: "192.168.39.115"} in network mk-custom-flannel-269490
	I0401 21:07:52.633446   72096 main.go:141] libmachine: (custom-flannel-269490) reserved static IP address 192.168.39.115 for domain custom-flannel-269490
	I0401 21:07:52.633469   72096 main.go:141] libmachine: (custom-flannel-269490) waiting for SSH...
	I0401 21:07:52.633478   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Getting to WaitForSSH function...
	I0401 21:07:52.636801   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.637228   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.637263   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.637457   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Using SSH client type: external
	I0401 21:07:52.637483   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Using SSH private key: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa (-rw-------)
	I0401 21:07:52.637524   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 21:07:52.637538   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | About to run SSH command:
	I0401 21:07:52.637570   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | exit 0
	I0401 21:07:52.767648   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | SSH cmd err, output: <nil>: 
	I0401 21:07:52.767922   72096 main.go:141] libmachine: (custom-flannel-269490) KVM machine creation complete
	I0401 21:07:52.768285   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:52.769401   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:52.769639   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:52.769839   72096 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 21:07:52.769855   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:07:52.771616   72096 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 21:07:52.771628   72096 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 21:07:52.771640   72096 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 21:07:52.771646   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:52.773957   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.774313   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.774339   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.774551   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:52.774732   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.774869   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.775003   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:52.775127   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:52.775341   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:52.775351   72096 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 21:07:52.885967   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:52.885995   72096 main.go:141] libmachine: Detecting the provisioner...
	I0401 21:07:52.886036   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:52.889797   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.890333   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:52.890380   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:52.890594   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:52.890795   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.891024   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:52.891176   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:52.891385   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:52.891599   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:52.891613   72096 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 21:07:52.999399   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 21:07:52.999480   72096 main.go:141] libmachine: found compatible host: buildroot
	I0401 21:07:52.999494   72096 main.go:141] libmachine: Provisioning with buildroot...
	I0401 21:07:52.999506   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:52.999737   72096 buildroot.go:166] provisioning hostname "custom-flannel-269490"
	I0401 21:07:52.999763   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:52.999983   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.002673   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.003040   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.003073   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.003201   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.003383   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.003531   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.003684   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.003853   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.004063   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.004074   72096 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-269490 && echo "custom-flannel-269490" | sudo tee /etc/hostname
	I0401 21:07:53.127662   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-269490
	
	I0401 21:07:53.127688   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.130650   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.131060   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.131088   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.131247   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.131442   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.131605   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.131748   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.131909   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.132149   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.132167   72096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-269490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-269490/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-269490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 21:07:53.247895   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 21:07:53.247927   72096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20506-9129/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-9129/.minikube}
	I0401 21:07:53.247979   72096 buildroot.go:174] setting up certificates
	I0401 21:07:53.247998   72096 provision.go:84] configureAuth start
	I0401 21:07:53.248027   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetMachineName
	I0401 21:07:53.248299   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:53.251231   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.251683   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.251709   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.251871   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.254321   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.254634   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.254653   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.254785   72096 provision.go:143] copyHostCerts
	I0401 21:07:53.254838   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem, removing ...
	I0401 21:07:53.254869   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem
	I0401 21:07:53.254963   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/ca.pem (1078 bytes)
	I0401 21:07:53.255070   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem, removing ...
	I0401 21:07:53.255080   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem
	I0401 21:07:53.255101   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/cert.pem (1123 bytes)
	I0401 21:07:53.255172   72096 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem, removing ...
	I0401 21:07:53.255181   72096 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem
	I0401 21:07:53.255206   72096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-9129/.minikube/key.pem (1675 bytes)
	I0401 21:07:53.255307   72096 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-269490 san=[127.0.0.1 192.168.39.115 custom-flannel-269490 localhost minikube]
	I0401 21:07:53.423568   72096 provision.go:177] copyRemoteCerts
	I0401 21:07:53.423622   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 21:07:53.423644   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.426471   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.426823   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.426852   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.427026   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.427209   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.427437   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.427602   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:53.508573   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 21:07:53.534446   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 21:07:53.561750   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0401 21:07:53.586361   72096 provision.go:87] duration metric: took 338.347084ms to configureAuth
	I0401 21:07:53.586388   72096 buildroot.go:189] setting minikube options for container-runtime
	I0401 21:07:53.586608   72096 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:07:53.586686   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.589262   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.589618   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.589647   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.589793   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.589985   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.590141   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.590283   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.590430   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.590630   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.590647   72096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 21:07:53.833008   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 21:07:53.833038   72096 main.go:141] libmachine: Checking connection to Docker...
	I0401 21:07:53.833049   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetURL
	I0401 21:07:53.834302   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | using libvirt version 6000000
	I0401 21:07:53.836570   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.836875   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.836903   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.837075   72096 main.go:141] libmachine: Docker is up and running!
	I0401 21:07:53.837093   72096 main.go:141] libmachine: Reticulating splines...
	I0401 21:07:53.837101   72096 client.go:171] duration metric: took 25.140961475s to LocalClient.Create
	I0401 21:07:53.837125   72096 start.go:167] duration metric: took 25.141025877s to libmachine.API.Create "custom-flannel-269490"
	I0401 21:07:53.837139   72096 start.go:293] postStartSetup for "custom-flannel-269490" (driver="kvm2")
	I0401 21:07:53.837151   72096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 21:07:53.837182   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:53.837406   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 21:07:53.837430   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.839674   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.839944   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.839977   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.840131   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.840293   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.840438   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.840600   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:53.925709   72096 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 21:07:53.930726   72096 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 21:07:53.930754   72096 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/addons for local assets ...
	I0401 21:07:53.930830   72096 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-9129/.minikube/files for local assets ...
	I0401 21:07:53.930898   72096 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0401 21:07:53.931007   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 21:07:53.941164   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:07:53.967167   72096 start.go:296] duration metric: took 130.01591ms for postStartSetup
	I0401 21:07:53.967217   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetConfigRaw
	I0401 21:07:53.967908   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:53.970732   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.971053   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.971088   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.971318   72096 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/config.json ...
	I0401 21:07:53.971510   72096 start.go:128] duration metric: took 25.295908261s to createHost
	I0401 21:07:53.971537   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:53.973863   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.974196   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:53.974232   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:53.974386   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:53.974599   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.974774   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:53.974910   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:53.975100   72096 main.go:141] libmachine: Using SSH client type: native
	I0401 21:07:53.975291   72096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0401 21:07:53.975302   72096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 21:07:54.083312   72096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1743541674.029447156
	
	I0401 21:07:54.083342   72096 fix.go:216] guest clock: 1743541674.029447156
	I0401 21:07:54.083352   72096 fix.go:229] Guest: 2025-04-01 21:07:54.029447156 +0000 UTC Remote: 2025-04-01 21:07:53.971522792 +0000 UTC m=+33.113971403 (delta=57.924364ms)
	I0401 21:07:54.083375   72096 fix.go:200] guest clock delta is within tolerance: 57.924364ms
	I0401 21:07:54.083382   72096 start.go:83] releasing machines lock for "custom-flannel-269490", held for 25.407944503s
	I0401 21:07:54.083403   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.083645   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:54.086274   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.086622   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.086664   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.086836   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087440   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087609   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:07:54.087702   72096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 21:07:54.087739   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:54.087821   72096 ssh_runner.go:195] Run: cat /version.json
	I0401 21:07:54.087841   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:07:54.090554   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.090879   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.090964   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.090990   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.091165   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:54.091298   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:54.091302   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:54.091344   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:54.091468   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:07:54.091525   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:54.091593   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:07:54.091664   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:54.091714   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:07:54.091847   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:07:54.193585   72096 ssh_runner.go:195] Run: systemctl --version
	I0401 21:07:54.199802   72096 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 21:07:54.362009   72096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 21:07:54.369775   72096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 21:07:54.369842   72096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 21:07:54.392464   72096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 21:07:54.392493   72096 start.go:495] detecting cgroup driver to use...
	I0401 21:07:54.392575   72096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 21:07:54.415229   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 21:07:54.430169   72096 docker.go:217] disabling cri-docker service (if available) ...
	I0401 21:07:54.430260   72096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 21:07:54.446557   72096 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 21:07:54.462441   72096 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 21:07:54.581314   72096 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 21:07:54.782985   72096 docker.go:233] disabling docker service ...
	I0401 21:07:54.783048   72096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 21:07:54.799920   72096 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 21:07:54.817125   72096 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 21:07:54.954170   72096 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 21:07:55.099520   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 21:07:55.125853   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 21:07:55.147184   72096 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 21:07:55.147253   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.158166   72096 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 21:07:55.158264   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.169739   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.180580   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.192009   72096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 21:07:55.202863   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.213770   72096 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.232492   72096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 21:07:55.243279   72096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 21:07:55.252819   72096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 21:07:55.252890   72096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 21:07:55.266009   72096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 21:07:55.276185   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:07:55.393356   72096 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 21:07:55.494039   72096 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 21:07:55.494118   72096 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 21:07:55.499309   72096 start.go:563] Will wait 60s for crictl version
	I0401 21:07:55.499366   72096 ssh_runner.go:195] Run: which crictl
	I0401 21:07:55.503928   72096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 21:07:55.551590   72096 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 21:07:55.551671   72096 ssh_runner.go:195] Run: crio --version
	I0401 21:07:55.584117   72096 ssh_runner.go:195] Run: crio --version
	I0401 21:07:55.615306   72096 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0401 21:07:55.616535   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetIP
	I0401 21:07:55.619254   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:55.619608   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:07:55.619636   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:07:55.619847   72096 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 21:07:55.624474   72096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:55.638014   72096 kubeadm.go:883] updating cluster {Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 21:07:55.638113   72096 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 21:07:55.638154   72096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:55.671768   72096 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 21:07:55.671841   72096 ssh_runner.go:195] Run: which lz4
	I0401 21:07:55.675956   72096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 21:07:55.680087   72096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 21:07:55.680112   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0401 21:07:52.964723   70627 addons.go:514] duration metric: took 1.188041211s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:07:54.369067   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:07:56.867804   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:07:57.258849   72096 crio.go:462] duration metric: took 1.582927832s to copy over tarball
	I0401 21:07:57.258910   72096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 21:07:59.713811   72096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.454879542s)
	I0401 21:07:59.713834   72096 crio.go:469] duration metric: took 2.454960019s to extract the tarball
	I0401 21:07:59.713841   72096 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 21:07:59.754131   72096 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 21:07:59.803175   72096 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 21:07:59.803203   72096 cache_images.go:84] Images are preloaded, skipping loading
	I0401 21:07:59.803211   72096 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.32.2 crio true true} ...
	I0401 21:07:59.803435   72096 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-269490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0401 21:07:59.803542   72096 ssh_runner.go:195] Run: crio config
	I0401 21:07:59.859211   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:07:59.859254   72096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 21:07:59.859279   72096 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-269490 NodeName:custom-flannel-269490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 21:07:59.859420   72096 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-269490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.115"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 21:07:59.859485   72096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 21:07:59.872776   72096 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 21:07:59.872854   72096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 21:07:59.885208   72096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0401 21:07:59.906315   72096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 21:07:59.925314   72096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2301 bytes)
	I0401 21:07:59.945350   72096 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0401 21:07:59.949720   72096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 21:07:59.963662   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:08:00.089313   72096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:08:00.110067   72096 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490 for IP: 192.168.39.115
	I0401 21:08:00.110106   72096 certs.go:194] generating shared ca certs ...
	I0401 21:08:00.110120   72096 certs.go:226] acquiring lock for ca certs: {Name:mk0c623f4e6ad9759b5056c3a8d35decb04e9dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.110294   72096 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key
	I0401 21:08:00.110353   72096 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key
	I0401 21:08:00.110366   72096 certs.go:256] generating profile certs ...
	I0401 21:08:00.110447   72096 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key
	I0401 21:08:00.110464   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt with IP's: []
	I0401 21:08:00.467453   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt ...
	I0401 21:08:00.467488   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.crt: {Name:mk5fce7bdfd13ea831b9ad59ba060161e466fba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.467673   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key ...
	I0401 21:08:00.467686   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/client.key: {Name:mkd84c13916801a689354e72412e009ab37dbcc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.467762   72096 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe
	I0401 21:08:00.467777   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.115]
	I0401 21:08:00.590635   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe ...
	I0401 21:08:00.590669   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe: {Name:mkda99eea5992b7c522818c8e4285bad25863233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.590826   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe ...
	I0401 21:08:00.590839   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe: {Name:mk9b0cf3137043b92f3b27be430ec53f12f6344f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.590912   72096 certs.go:381] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt.228d5bfe -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt
	I0401 21:08:00.590994   72096 certs.go:385] copying /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key.228d5bfe -> /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key
	I0401 21:08:00.591062   72096 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key
	I0401 21:08:00.591077   72096 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt with IP's: []
	I0401 21:08:00.940635   72096 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt ...
	I0401 21:08:00.940673   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt: {Name:mked6a267559570093b231c1df683bf03eedde35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.940870   72096 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key ...
	I0401 21:08:00.940890   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key: {Name:mke111681e05b7c77b9764da674c41796facd6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:00.941091   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem (1338 bytes)
	W0401 21:08:00.941141   72096 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0401 21:08:00.941157   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 21:08:00.941192   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/ca.pem (1078 bytes)
	I0401 21:08:00.941230   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/cert.pem (1123 bytes)
	I0401 21:08:00.941263   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/certs/key.pem (1675 bytes)
	I0401 21:08:00.941317   72096 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0401 21:08:00.941848   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 21:08:01.021801   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0401 21:08:01.047883   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 21:08:01.076127   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 21:08:01.101880   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 21:08:01.128066   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 21:08:01.155676   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 21:08:01.181194   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/custom-flannel-269490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 21:08:01.208023   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0401 21:08:01.235447   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 21:08:01.263882   72096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-9129/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0401 21:08:01.291788   72096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 21:08:01.311432   72096 ssh_runner.go:195] Run: openssl version
	I0401 21:08:01.317827   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0401 21:08:01.330054   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.335156   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:55 /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.335215   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0401 21:08:01.341534   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0401 21:08:01.353100   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0401 21:08:01.364974   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.370126   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:55 /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.370182   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0401 21:08:01.376077   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 21:08:01.387280   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 21:08:01.398763   72096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.403624   72096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.403672   72096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 21:08:01.409399   72096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 21:08:01.421319   72096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 21:08:01.426390   72096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 21:08:01.426469   72096 kubeadm.go:392] StartCluster: {Name:custom-flannel-269490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:custom-flannel-269490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 21:08:01.426539   72096 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 21:08:01.426621   72096 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 21:08:01.483618   72096 cri.go:89] found id: ""
	I0401 21:08:01.483709   72096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 21:08:01.497458   72096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 21:08:01.510064   72096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 21:08:01.525097   72096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 21:08:01.525126   72096 kubeadm.go:157] found existing configuration files:
	
	I0401 21:08:01.525187   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 21:08:01.538475   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 21:08:01.538537   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 21:08:01.549865   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 21:08:01.564435   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 21:08:01.564512   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 21:08:01.577112   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 21:08:01.588654   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 21:08:01.588723   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 21:08:01.600399   72096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 21:08:01.611302   72096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 21:08:01.611382   72096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 21:08:01.626795   72096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 21:08:01.706166   72096 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 21:08:01.706290   72096 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:01.816483   72096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:01.816607   72096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:01.816718   72096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 21:08:01.826517   72096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:07:59.368327   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:01.867707   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:01.944921   72096 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:01.945033   72096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:01.945102   72096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:01.997637   72096 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 21:08:02.082193   72096 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 21:08:02.370051   72096 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 21:08:02.610131   72096 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 21:08:02.813327   72096 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 21:08:02.813505   72096 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-269490 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0401 21:08:02.959340   72096 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 21:08:02.959508   72096 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-269490 localhost] and IPs [192.168.39.115 127.0.0.1 ::1]
	I0401 21:08:03.064528   72096 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 21:08:03.321464   72096 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 21:08:03.362989   72096 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 21:08:03.363077   72096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:03.478482   72096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:03.742329   72096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 21:08:03.877782   72096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:04.064813   72096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:04.137063   72096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:04.137482   72096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:04.141208   72096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:04.143036   72096 out.go:235]   - Booting up control plane ...
	I0401 21:08:04.143157   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:04.144620   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:04.145423   72096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:04.172192   72096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:04.183885   72096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:04.183985   72096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:04.340951   72096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 21:08:04.341118   72096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 21:08:04.842463   72096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.673213ms
	I0401 21:08:04.842565   72096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 21:08:03.867783   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:05.868899   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:10.848073   72096 kubeadm.go:310] [api-check] The API server is healthy after 6.003303805s
	I0401 21:08:10.859890   72096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 21:08:10.875896   72096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 21:08:10.906682   72096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 21:08:10.906981   72096 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-269490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 21:08:10.931670   72096 kubeadm.go:310] [bootstrap-token] Using token: y1rxzx.ol9rd2e05i88tezo
	I0401 21:08:07.870418   70627 node_ready.go:53] node "kindnet-269490" has status "Ready":"False"
	I0401 21:08:09.374853   70627 node_ready.go:49] node "kindnet-269490" has status "Ready":"True"
	I0401 21:08:09.374880   70627 node_ready.go:38] duration metric: took 17.010513164s for node "kindnet-269490" to be "Ready" ...
	I0401 21:08:09.374892   70627 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:09.378622   70627 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.383841   70627 pod_ready.go:93] pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.383869   70627 pod_ready.go:82] duration metric: took 1.005212656s for pod "coredns-668d6bf9bc-fqk9t" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.383881   70627 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.388202   70627 pod_ready.go:93] pod "etcd-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.388230   70627 pod_ready.go:82] duration metric: took 4.341416ms for pod "etcd-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.388246   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.393029   70627 pod_ready.go:93] pod "kube-apiserver-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.393061   70627 pod_ready.go:82] duration metric: took 4.797935ms for pod "kube-apiserver-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.393076   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.397690   70627 pod_ready.go:93] pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.397711   70627 pod_ready.go:82] duration metric: took 4.626561ms for pod "kube-controller-manager-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.397722   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-b5cp4" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.570047   70627 pod_ready.go:93] pod "kube-proxy-b5cp4" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.570070   70627 pod_ready.go:82] duration metric: took 172.341286ms for pod "kube-proxy-b5cp4" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.570080   70627 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.969135   70627 pod_ready.go:93] pod "kube-scheduler-kindnet-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:10.969167   70627 pod_ready.go:82] duration metric: took 399.078827ms for pod "kube-scheduler-kindnet-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:10.969182   70627 pod_ready.go:39] duration metric: took 1.594272558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:10.969200   70627 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:08:10.969260   70627 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:08:10.986425   70627 api_server.go:72] duration metric: took 19.20965796s to wait for apiserver process to appear ...
	I0401 21:08:10.986449   70627 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:08:10.986476   70627 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0401 21:08:10.991890   70627 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0401 21:08:10.993199   70627 api_server.go:141] control plane version: v1.32.2
	I0401 21:08:10.993221   70627 api_server.go:131] duration metric: took 6.765166ms to wait for apiserver health ...
	I0401 21:08:10.993228   70627 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:08:11.169752   70627 system_pods.go:59] 8 kube-system pods found
	I0401 21:08:11.169784   70627 system_pods.go:61] "coredns-668d6bf9bc-fqk9t" [1aa997a2-044b-4f1e-bd5f-eb88acdcd380] Running
	I0401 21:08:11.169789   70627 system_pods.go:61] "etcd-kindnet-269490" [6eb8dc71-efc6-40e9-89db-6947499e653f] Running
	I0401 21:08:11.169793   70627 system_pods.go:61] "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
	I0401 21:08:11.169796   70627 system_pods.go:61] "kube-apiserver-kindnet-269490" [35601d6b-2485-45ff-b906-80cd3d73bb50] Running
	I0401 21:08:11.169800   70627 system_pods.go:61] "kube-controller-manager-kindnet-269490" [75f07631-fab7-404a-b309-4ea7d2af791e] Running
	I0401 21:08:11.169803   70627 system_pods.go:61] "kube-proxy-b5cp4" [dce5a6b6-9133-4a63-b683-ffbe803e9481] Running
	I0401 21:08:11.169806   70627 system_pods.go:61] "kube-scheduler-kindnet-269490" [313714c7-ef0d-4991-b38e-7ea5d1815849] Running
	I0401 21:08:11.169808   70627 system_pods.go:61] "storage-provisioner" [39894cc3-b618-4ee1-8a46-7065c914830c] Running
	I0401 21:08:11.169816   70627 system_pods.go:74] duration metric: took 176.581209ms to wait for pod list to return data ...
	I0401 21:08:11.169825   70627 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:08:11.370607   70627 default_sa.go:45] found service account: "default"
	I0401 21:08:11.370635   70627 default_sa.go:55] duration metric: took 200.803332ms for default service account to be created ...
	I0401 21:08:11.370646   70627 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:08:11.570070   70627 system_pods.go:86] 8 kube-system pods found
	I0401 21:08:11.570099   70627 system_pods.go:89] "coredns-668d6bf9bc-fqk9t" [1aa997a2-044b-4f1e-bd5f-eb88acdcd380] Running
	I0401 21:08:11.570105   70627 system_pods.go:89] "etcd-kindnet-269490" [6eb8dc71-efc6-40e9-89db-6947499e653f] Running
	I0401 21:08:11.570109   70627 system_pods.go:89] "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
	I0401 21:08:11.570112   70627 system_pods.go:89] "kube-apiserver-kindnet-269490" [35601d6b-2485-45ff-b906-80cd3d73bb50] Running
	I0401 21:08:11.570116   70627 system_pods.go:89] "kube-controller-manager-kindnet-269490" [75f07631-fab7-404a-b309-4ea7d2af791e] Running
	I0401 21:08:11.570118   70627 system_pods.go:89] "kube-proxy-b5cp4" [dce5a6b6-9133-4a63-b683-ffbe803e9481] Running
	I0401 21:08:11.570122   70627 system_pods.go:89] "kube-scheduler-kindnet-269490" [313714c7-ef0d-4991-b38e-7ea5d1815849] Running
	I0401 21:08:11.570125   70627 system_pods.go:89] "storage-provisioner" [39894cc3-b618-4ee1-8a46-7065c914830c] Running
	I0401 21:08:11.570132   70627 system_pods.go:126] duration metric: took 199.479575ms to wait for k8s-apps to be running ...
	I0401 21:08:11.570138   70627 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:08:11.570180   70627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:08:11.587544   70627 system_svc.go:56] duration metric: took 17.395489ms WaitForService to wait for kubelet
	I0401 21:08:11.587581   70627 kubeadm.go:582] duration metric: took 19.810818504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:08:11.587624   70627 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:08:11.769946   70627 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:08:11.769972   70627 node_conditions.go:123] node cpu capacity is 2
	I0401 21:08:11.769983   70627 node_conditions.go:105] duration metric: took 182.353276ms to run NodePressure ...
	I0401 21:08:11.769993   70627 start.go:241] waiting for startup goroutines ...
	I0401 21:08:11.770001   70627 start.go:246] waiting for cluster config update ...
	I0401 21:08:11.770014   70627 start.go:255] writing updated cluster config ...
	I0401 21:08:11.770327   70627 ssh_runner.go:195] Run: rm -f paused
	I0401 21:08:11.821228   70627 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:08:11.823026   70627 out.go:177] * Done! kubectl is now configured to use "kindnet-269490" cluster and "default" namespace by default
	I0401 21:08:10.933219   72096 out.go:235]   - Configuring RBAC rules ...
	I0401 21:08:10.933389   72096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 21:08:10.953572   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 21:08:10.970295   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 21:08:10.974769   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 21:08:10.978152   72096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 21:08:10.982485   72096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 21:08:11.255128   72096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 21:08:11.700130   72096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 21:08:12.254377   72096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 21:08:12.254408   72096 kubeadm.go:310] 
	I0401 21:08:12.254498   72096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 21:08:12.254529   72096 kubeadm.go:310] 
	I0401 21:08:12.254681   72096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 21:08:12.254700   72096 kubeadm.go:310] 
	I0401 21:08:12.254729   72096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 21:08:12.254812   72096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 21:08:12.254885   72096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 21:08:12.254895   72096 kubeadm.go:310] 
	I0401 21:08:12.254989   72096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 21:08:12.254999   72096 kubeadm.go:310] 
	I0401 21:08:12.255069   72096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 21:08:12.255078   72096 kubeadm.go:310] 
	I0401 21:08:12.255148   72096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 21:08:12.255261   72096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 21:08:12.255357   72096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 21:08:12.255368   72096 kubeadm.go:310] 
	I0401 21:08:12.255483   72096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 21:08:12.255610   72096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 21:08:12.255631   72096 kubeadm.go:310] 
	I0401 21:08:12.255741   72096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y1rxzx.ol9rd2e05i88tezo \
	I0401 21:08:12.255881   72096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 \
	I0401 21:08:12.255916   72096 kubeadm.go:310] 	--control-plane 
	I0401 21:08:12.255926   72096 kubeadm.go:310] 
	I0401 21:08:12.256021   72096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 21:08:12.256030   72096 kubeadm.go:310] 
	I0401 21:08:12.256150   72096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y1rxzx.ol9rd2e05i88tezo \
	I0401 21:08:12.256298   72096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62423b8ff17ebf3fa36d8d6f31523e02318938efef17617f484eab44db851c38 
	I0401 21:08:12.257066   72096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 21:08:12.257093   72096 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0401 21:08:12.259236   72096 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0401 21:08:12.260686   72096 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 21:08:12.260745   72096 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0401 21:08:12.267034   72096 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0401 21:08:12.267068   72096 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0401 21:08:12.296900   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 21:08:12.848752   72096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 21:08:12.848860   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:12.848947   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-269490 minikube.k8s.io/updated_at=2025_04_01T21_08_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=custom-flannel-269490 minikube.k8s.io/primary=true
	I0401 21:08:12.877431   72096 ops.go:34] apiserver oom_adj: -16
	I0401 21:08:12.985187   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:13.485414   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:13.985981   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:14.485489   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:14.985825   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:15.485827   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:15.985754   72096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 21:08:16.148701   72096 kubeadm.go:1113] duration metric: took 3.299903142s to wait for elevateKubeSystemPrivileges
	I0401 21:08:16.148749   72096 kubeadm.go:394] duration metric: took 14.722285454s to StartCluster
	I0401 21:08:16.148769   72096 settings.go:142] acquiring lock: {Name:mk730f122b2ca6461d1332a4ce407be8655dd967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:16.148863   72096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 21:08:16.150194   72096 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-9129/kubeconfig: {Name:mkf811d7585652ae33be30f87691fb2de9aa1785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 21:08:16.150504   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 21:08:16.150507   72096 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 21:08:16.150594   72096 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 21:08:16.150716   72096 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 21:08:16.150735   72096 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-269490"
	I0401 21:08:16.150760   72096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-269490"
	I0401 21:08:16.150715   72096 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-269490"
	I0401 21:08:16.150863   72096 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-269490"
	I0401 21:08:16.150890   72096 host.go:66] Checking if "custom-flannel-269490" exists ...
	I0401 21:08:16.151250   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.151283   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.151250   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.151392   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.152288   72096 out.go:177] * Verifying Kubernetes components...
	I0401 21:08:16.153941   72096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 21:08:16.167829   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0401 21:08:16.167856   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0401 21:08:16.168243   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.168391   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.168828   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.168843   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.168868   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.168884   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.169237   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.169245   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.169517   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.169824   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.169861   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.172742   72096 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-269490"
	I0401 21:08:16.172773   72096 host.go:66] Checking if "custom-flannel-269490" exists ...
	I0401 21:08:16.172999   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.173021   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.187721   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0401 21:08:16.188253   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.188750   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.188774   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.189282   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.189445   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.189724   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0401 21:08:16.190201   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.190710   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.190728   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.191093   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.191453   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:08:16.191654   72096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 21:08:16.191689   72096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 21:08:16.192999   72096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 21:08:16.194424   72096 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:08:16.194442   72096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 21:08:16.194461   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:08:16.197511   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.198005   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:08:16.198041   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.198238   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:08:16.198409   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:08:16.198748   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:08:16.198918   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:08:16.207703   72096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0401 21:08:16.208135   72096 main.go:141] libmachine: () Calling .GetVersion
	I0401 21:08:16.208589   72096 main.go:141] libmachine: Using API Version  1
	I0401 21:08:16.208612   72096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 21:08:16.209006   72096 main.go:141] libmachine: () Calling .GetMachineName
	I0401 21:08:16.209189   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetState
	I0401 21:08:16.211107   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .DriverName
	I0401 21:08:16.211344   72096 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 21:08:16.211365   72096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 21:08:16.211385   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHHostname
	I0401 21:08:16.213813   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.214123   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:3c:1b", ip: ""} in network mk-custom-flannel-269490: {Iface:virbr1 ExpiryTime:2025-04-01 22:07:45 +0000 UTC Type:0 Mac:52:54:00:bc:3c:1b Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:custom-flannel-269490 Clientid:01:52:54:00:bc:3c:1b}
	I0401 21:08:16.214151   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | domain custom-flannel-269490 has defined IP address 192.168.39.115 and MAC address 52:54:00:bc:3c:1b in network mk-custom-flannel-269490
	I0401 21:08:16.214296   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHPort
	I0401 21:08:16.214499   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHKeyPath
	I0401 21:08:16.214910   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .GetSSHUsername
	I0401 21:08:16.215227   72096 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/custom-flannel-269490/id_rsa Username:docker}
	I0401 21:08:16.590199   72096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 21:08:16.590208   72096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 21:08:16.643763   72096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 21:08:16.713804   72096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 21:08:17.209943   72096 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0401 21:08:17.210084   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.210105   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.210495   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.210517   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.210528   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.210536   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.210760   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.210776   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.211295   72096 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-269490" to be "Ready" ...
	I0401 21:08:17.251129   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.251163   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.251515   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.251537   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.251546   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.513636   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.513660   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.515627   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.515656   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.515670   72096 main.go:141] libmachine: Making call to close driver server
	I0401 21:08:17.515670   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.515679   72096 main.go:141] libmachine: (custom-flannel-269490) Calling .Close
	I0401 21:08:17.515935   72096 main.go:141] libmachine: Successfully made call to close driver server
	I0401 21:08:17.515951   72096 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 21:08:17.515959   72096 main.go:141] libmachine: (custom-flannel-269490) DBG | Closing plugin on server side
	I0401 21:08:17.517748   72096 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 21:08:17.519567   72096 addons.go:514] duration metric: took 1.36897309s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 21:08:17.714019   72096 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-269490" context rescaled to 1 replicas
	I0401 21:08:19.214809   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:21.214845   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:23.715394   72096 node_ready.go:53] node "custom-flannel-269490" has status "Ready":"False"
	I0401 21:08:25.752337   72096 node_ready.go:49] node "custom-flannel-269490" has status "Ready":"True"
	I0401 21:08:25.752361   72096 node_ready.go:38] duration metric: took 8.541004401s for node "custom-flannel-269490" to be "Ready" ...
	I0401 21:08:25.752373   72096 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:25.781711   72096 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:27.788318   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:29.789254   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:32.287111   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:34.287266   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:36.288139   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:38.788164   72096 pod_ready.go:103] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"False"
	I0401 21:08:39.288278   72096 pod_ready.go:93] pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.288311   72096 pod_ready.go:82] duration metric: took 13.506568961s for pod "coredns-668d6bf9bc-5mj4j" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.288323   72096 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.293894   72096 pod_ready.go:93] pod "etcd-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.293914   72096 pod_ready.go:82] duration metric: took 5.583334ms for pod "etcd-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.293922   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.299231   72096 pod_ready.go:93] pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.299260   72096 pod_ready.go:82] duration metric: took 5.329864ms for pod "kube-apiserver-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.299273   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.303589   72096 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.303611   72096 pod_ready.go:82] duration metric: took 4.329184ms for pod "kube-controller-manager-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.303626   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7mfxw" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.307588   72096 pod_ready.go:93] pod "kube-proxy-7mfxw" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.307608   72096 pod_ready.go:82] duration metric: took 3.974955ms for pod "kube-proxy-7mfxw" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.307619   72096 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.686205   72096 pod_ready.go:93] pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace has status "Ready":"True"
	I0401 21:08:39.686262   72096 pod_ready.go:82] duration metric: took 378.634734ms for pod "kube-scheduler-custom-flannel-269490" in "kube-system" namespace to be "Ready" ...
	I0401 21:08:39.686278   72096 pod_ready.go:39] duration metric: took 13.933890743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 21:08:39.686295   72096 api_server.go:52] waiting for apiserver process to appear ...
	I0401 21:08:39.686354   72096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 21:08:39.707371   72096 api_server.go:72] duration metric: took 23.556833358s to wait for apiserver process to appear ...
	I0401 21:08:39.707408   72096 api_server.go:88] waiting for apiserver healthz status ...
	I0401 21:08:39.707430   72096 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0401 21:08:39.712196   72096 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0401 21:08:39.713175   72096 api_server.go:141] control plane version: v1.32.2
	I0401 21:08:39.713206   72096 api_server.go:131] duration metric: took 5.790036ms to wait for apiserver health ...
	I0401 21:08:39.713216   72096 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 21:08:39.887726   72096 system_pods.go:59] 7 kube-system pods found
	I0401 21:08:39.887756   72096 system_pods.go:61] "coredns-668d6bf9bc-5mj4j" [36eeaf01-f8b5-4b27-a127-3e8e6fb6fe55] Running
	I0401 21:08:39.887763   72096 system_pods.go:61] "etcd-custom-flannel-269490" [13ff3a81-1ab8-47ea-9773-5d96ece48b42] Running
	I0401 21:08:39.887768   72096 system_pods.go:61] "kube-apiserver-custom-flannel-269490" [6593f2ea-974b-4d95-89ea-5231ae3f8f9a] Running
	I0401 21:08:39.887773   72096 system_pods.go:61] "kube-controller-manager-custom-flannel-269490" [badd65c7-6a1d-4ac6-8e2b-81b0523d520a] Running
	I0401 21:08:39.887777   72096 system_pods.go:61] "kube-proxy-7mfxw" [1b07ba12-0e06-432e-b1ef-6712daa0aceb] Running
	I0401 21:08:39.887786   72096 system_pods.go:61] "kube-scheduler-custom-flannel-269490" [c28fe18b-4d5e-481c-9f77-897e84bdc147] Running
	I0401 21:08:39.887791   72096 system_pods.go:61] "storage-provisioner" [23315522-a502-4852-98ec-9589e819d09c] Running
	I0401 21:08:39.887799   72096 system_pods.go:74] duration metric: took 174.575758ms to wait for pod list to return data ...
	I0401 21:08:39.887809   72096 default_sa.go:34] waiting for default service account to be created ...
	I0401 21:08:40.086898   72096 default_sa.go:45] found service account: "default"
	I0401 21:08:40.086922   72096 default_sa.go:55] duration metric: took 199.10767ms for default service account to be created ...
	I0401 21:08:40.086932   72096 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 21:08:40.287384   72096 system_pods.go:86] 7 kube-system pods found
	I0401 21:08:40.287416   72096 system_pods.go:89] "coredns-668d6bf9bc-5mj4j" [36eeaf01-f8b5-4b27-a127-3e8e6fb6fe55] Running
	I0401 21:08:40.287421   72096 system_pods.go:89] "etcd-custom-flannel-269490" [13ff3a81-1ab8-47ea-9773-5d96ece48b42] Running
	I0401 21:08:40.287425   72096 system_pods.go:89] "kube-apiserver-custom-flannel-269490" [6593f2ea-974b-4d95-89ea-5231ae3f8f9a] Running
	I0401 21:08:40.287429   72096 system_pods.go:89] "kube-controller-manager-custom-flannel-269490" [badd65c7-6a1d-4ac6-8e2b-81b0523d520a] Running
	I0401 21:08:40.287432   72096 system_pods.go:89] "kube-proxy-7mfxw" [1b07ba12-0e06-432e-b1ef-6712daa0aceb] Running
	I0401 21:08:40.287435   72096 system_pods.go:89] "kube-scheduler-custom-flannel-269490" [c28fe18b-4d5e-481c-9f77-897e84bdc147] Running
	I0401 21:08:40.287438   72096 system_pods.go:89] "storage-provisioner" [23315522-a502-4852-98ec-9589e819d09c] Running
	I0401 21:08:40.287443   72096 system_pods.go:126] duration metric: took 200.50653ms to wait for k8s-apps to be running ...
	I0401 21:08:40.287450   72096 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 21:08:40.287503   72096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 21:08:40.303609   72096 system_svc.go:56] duration metric: took 16.150777ms WaitForService to wait for kubelet
	I0401 21:08:40.303639   72096 kubeadm.go:582] duration metric: took 24.153106492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 21:08:40.303665   72096 node_conditions.go:102] verifying NodePressure condition ...
	I0401 21:08:40.486884   72096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 21:08:40.486919   72096 node_conditions.go:123] node cpu capacity is 2
	I0401 21:08:40.486933   72096 node_conditions.go:105] duration metric: took 183.261884ms to run NodePressure ...
	I0401 21:08:40.486946   72096 start.go:241] waiting for startup goroutines ...
	I0401 21:08:40.486955   72096 start.go:246] waiting for cluster config update ...
	I0401 21:08:40.486969   72096 start.go:255] writing updated cluster config ...
	I0401 21:08:40.487283   72096 ssh_runner.go:195] Run: rm -f paused
	I0401 21:08:40.546242   72096 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 21:08:40.548286   72096 out.go:177] * Done! kubectl is now configured to use "custom-flannel-269490" cluster and "default" namespace by default
	I0401 21:08:44.694071   61496 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 21:08:44.694235   61496 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0401 21:08:44.695734   61496 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 21:08:44.695829   61496 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 21:08:44.695942   61496 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 21:08:44.696082   61496 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 21:08:44.696333   61496 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 21:08:44.696433   61496 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 21:08:44.698422   61496 out.go:235]   - Generating certificates and keys ...
	I0401 21:08:44.698535   61496 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 21:08:44.698622   61496 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 21:08:44.698707   61496 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 21:08:44.698782   61496 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0401 21:08:44.698848   61496 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 21:08:44.698894   61496 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0401 21:08:44.698952   61496 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0401 21:08:44.699004   61496 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0401 21:08:44.699067   61496 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 21:08:44.699131   61496 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 21:08:44.699164   61496 kubeadm.go:310] [certs] Using the existing "sa" key
	I0401 21:08:44.699213   61496 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 21:08:44.699257   61496 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 21:08:44.699302   61496 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 21:08:44.699360   61496 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 21:08:44.699410   61496 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 21:08:44.699518   61496 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 21:08:44.699595   61496 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 21:08:44.699630   61496 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 21:08:44.699705   61496 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 21:08:44.701085   61496 out.go:235]   - Booting up control plane ...
	I0401 21:08:44.701182   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 21:08:44.701269   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 21:08:44.701370   61496 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 21:08:44.701492   61496 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 21:08:44.701663   61496 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 21:08:44.701710   61496 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0401 21:08:44.701768   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.701969   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702033   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702244   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702341   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702570   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702639   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.702818   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.702922   61496 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 21:08:44.703238   61496 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 21:08:44.703248   61496 kubeadm.go:310] 
	I0401 21:08:44.703300   61496 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0401 21:08:44.703339   61496 kubeadm.go:310] 		timed out waiting for the condition
	I0401 21:08:44.703347   61496 kubeadm.go:310] 
	I0401 21:08:44.703393   61496 kubeadm.go:310] 	This error is likely caused by:
	I0401 21:08:44.703424   61496 kubeadm.go:310] 		- The kubelet is not running
	I0401 21:08:44.703575   61496 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 21:08:44.703594   61496 kubeadm.go:310] 
	I0401 21:08:44.703747   61496 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 21:08:44.703797   61496 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0401 21:08:44.703843   61496 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0401 21:08:44.703851   61496 kubeadm.go:310] 
	I0401 21:08:44.703979   61496 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 21:08:44.704106   61496 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 21:08:44.704117   61496 kubeadm.go:310] 
	I0401 21:08:44.704223   61496 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 21:08:44.704338   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 21:08:44.704400   61496 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0401 21:08:44.704458   61496 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 21:08:44.704515   61496 kubeadm.go:394] duration metric: took 8m1.369559682s to StartCluster
	I0401 21:08:44.704550   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 21:08:44.704601   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 21:08:44.704607   61496 kubeadm.go:310] 
	I0401 21:08:44.776607   61496 cri.go:89] found id: ""
	I0401 21:08:44.776631   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.776638   61496 logs.go:284] No container was found matching "kube-apiserver"
	I0401 21:08:44.776643   61496 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 21:08:44.776688   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 21:08:44.822697   61496 cri.go:89] found id: ""
	I0401 21:08:44.822724   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.822732   61496 logs.go:284] No container was found matching "etcd"
	I0401 21:08:44.822737   61496 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 21:08:44.822789   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 21:08:44.870855   61496 cri.go:89] found id: ""
	I0401 21:08:44.870884   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.870895   61496 logs.go:284] No container was found matching "coredns"
	I0401 21:08:44.870903   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 21:08:44.870963   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 21:08:44.909983   61496 cri.go:89] found id: ""
	I0401 21:08:44.910010   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.910019   61496 logs.go:284] No container was found matching "kube-scheduler"
	I0401 21:08:44.910025   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 21:08:44.910205   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 21:08:44.947636   61496 cri.go:89] found id: ""
	I0401 21:08:44.947667   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.947677   61496 logs.go:284] No container was found matching "kube-proxy"
	I0401 21:08:44.947684   61496 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 21:08:44.947742   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 21:08:44.987225   61496 cri.go:89] found id: ""
	I0401 21:08:44.987254   61496 logs.go:282] 0 containers: []
	W0401 21:08:44.987265   61496 logs.go:284] No container was found matching "kube-controller-manager"
	I0401 21:08:44.987273   61496 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 21:08:44.987328   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 21:08:45.031455   61496 cri.go:89] found id: ""
	I0401 21:08:45.031483   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.031493   61496 logs.go:284] No container was found matching "kindnet"
	I0401 21:08:45.031498   61496 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 21:08:45.031556   61496 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 21:08:45.073545   61496 cri.go:89] found id: ""
	I0401 21:08:45.073572   61496 logs.go:282] 0 containers: []
	W0401 21:08:45.073582   61496 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0401 21:08:45.073593   61496 logs.go:123] Gathering logs for kubelet ...
	I0401 21:08:45.073604   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 21:08:45.139059   61496 logs.go:123] Gathering logs for dmesg ...
	I0401 21:08:45.139110   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 21:08:45.156271   61496 logs.go:123] Gathering logs for describe nodes ...
	I0401 21:08:45.156309   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 21:08:45.239654   61496 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 21:08:45.239682   61496 logs.go:123] Gathering logs for CRI-O ...
	I0401 21:08:45.239697   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 21:08:45.355473   61496 logs.go:123] Gathering logs for container status ...
	I0401 21:08:45.355501   61496 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0401 21:08:45.401208   61496 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 21:08:45.401255   61496 out.go:270] * 
	W0401 21:08:45.401306   61496 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.401323   61496 out.go:270] * 
	W0401 21:08:45.402124   61496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 21:08:45.405265   61496 out.go:201] 
	W0401 21:08:45.406413   61496 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 21:08:45.406448   61496 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 21:08:45.406470   61496 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 21:08:45.407866   61496 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.499548957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542623499525365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb34bac6-16c5-4a8e-9f4b-33ad799846e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.500233016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd0d828e-874d-4573-8bf2-a13ed2ec9adc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.500290003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd0d828e-874d-4573-8bf2-a13ed2ec9adc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.500324785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cd0d828e-874d-4573-8bf2-a13ed2ec9adc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.533105231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12ad1d8b-57ec-4010-8811-fcd1375e6c4e name=/runtime.v1.RuntimeService/Version
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.533212072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12ad1d8b-57ec-4010-8811-fcd1375e6c4e name=/runtime.v1.RuntimeService/Version
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.534182113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=222d9b0d-fe70-4d7a-abf2-206492f9b438 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.534563411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542623534538150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=222d9b0d-fe70-4d7a-abf2-206492f9b438 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.535149211Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d609b2b-8ff6-4220-8830-7f8303b93750 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.535218197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d609b2b-8ff6-4220-8830-7f8303b93750 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.535250955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8d609b2b-8ff6-4220-8830-7f8303b93750 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.569050977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f459291-e94e-4a30-a58b-2fad03ae427b name=/runtime.v1.RuntimeService/Version
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.569146744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f459291-e94e-4a30-a58b-2fad03ae427b name=/runtime.v1.RuntimeService/Version
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.570502069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45a11798-9704-403e-b931-6b4983624016 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.570909522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542623570881286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45a11798-9704-403e-b931-6b4983624016 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.571832064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2211e4e-5881-4827-9e4c-de256bf65653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.571920209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2211e4e-5881-4827-9e4c-de256bf65653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.572013566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b2211e4e-5881-4827-9e4c-de256bf65653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.605546181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49c738ad-c859-44f2-b9fc-6036216555f1 name=/runtime.v1.RuntimeService/Version
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.605639348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49c738ad-c859-44f2-b9fc-6036216555f1 name=/runtime.v1.RuntimeService/Version
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.606901780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4430ce17-8384-4c00-8167-bc0265a6c1fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.607366068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743542623607341425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4430ce17-8384-4c00-8167-bc0265a6c1fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.607941089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da38790f-b8c6-4d2c-af90-ec1dbbc59192 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.608072637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da38790f-b8c6-4d2c-af90-ec1dbbc59192 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 21:23:43 old-k8s-version-582207 crio[644]: time="2025-04-01 21:23:43.608105059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=da38790f-b8c6-4d2c-af90-ec1dbbc59192 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 21:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054135] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041531] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.204738] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.959861] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.661664] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.677904] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068300] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079515] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.190777] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.171995] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.258506] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.231600] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.068848] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.731705] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[ +11.880365] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 21:04] systemd-fstab-generator[5005]: Ignoring "noauto" option for root device
	[Apr 1 21:06] systemd-fstab-generator[5279]: Ignoring "noauto" option for root device
	[  +0.075307] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:23:43 up 23 min,  0 users,  load average: 0.02, 0.03, 0.00
	Linux old-k8s-version-582207 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000c1ca20, 0x48ab5d6, 0x3, 0xc000b7dbf0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c1ca20, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b7dbf0, 0x24, 0x0, ...)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: net.(*Dialer).DialContext(0xc000afee40, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b7dbf0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b04ca0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b7dbf0, 0x24, 0x60, 0x7fe1c0bd33b0, 0x118, ...)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: net/http.(*Transport).dial(0xc000a2be00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b7dbf0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: net/http.(*Transport).dialConn(0xc000a2be00, 0x4f7fe00, 0xc000052030, 0x0, 0xc000bfac60, 0x5, 0xc000b7dbf0, 0x24, 0x0, 0xc000bdeb40, ...)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: net/http.(*Transport).dialConnFor(0xc000a2be00, 0xc000b706e0)
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]: created by net/http.(*Transport).queueForDial
	Apr 01 21:23:38 old-k8s-version-582207 kubelet[7103]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 01 21:23:39 old-k8s-version-582207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 175.
	Apr 01 21:23:39 old-k8s-version-582207 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 01 21:23:39 old-k8s-version-582207 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 01 21:23:39 old-k8s-version-582207 kubelet[7112]: I0401 21:23:39.358284    7112 server.go:416] Version: v1.20.0
	Apr 01 21:23:39 old-k8s-version-582207 kubelet[7112]: I0401 21:23:39.358564    7112 server.go:837] Client rotation is on, will bootstrap in background
	Apr 01 21:23:39 old-k8s-version-582207 kubelet[7112]: I0401 21:23:39.360497    7112 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 01 21:23:39 old-k8s-version-582207 kubelet[7112]: W0401 21:23:39.361440    7112 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 01 21:23:39 old-k8s-version-582207 kubelet[7112]: I0401 21:23:39.361655    7112 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 2 (224.53086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-582207" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (355.46s)
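
The failure above is the kubeadm wait-control-plane timeout on the old-k8s-version (v1.20.0) profile: the kubelet on old-k8s-version-582207 never answers its local healthz probe, so CRI-O ends up with no control-plane containers at all (the container status section in the log is empty). The commands below are a minimal troubleshooting sketch assembled from the hints in the log itself (the healthz probe, the systemctl/journalctl/crictl commands, and the cgroup-driver suggestion); the profile name, runtime endpoint, and Kubernetes version are taken from the log, while the exact start flags are illustrative rather than a verified fix.

    # Probe the kubelet health endpoint that kubeadm polls (port 10248 inside the node).
    out/minikube-linux-amd64 -p old-k8s-version-582207 ssh "curl -sSL http://localhost:10248/healthz"

    # Inspect the kubelet unit and its recent log, as the kubeadm error text suggests.
    out/minikube-linux-amd64 -p old-k8s-version-582207 ssh "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-amd64 -p old-k8s-version-582207 ssh "sudo journalctl -xeu kubelet | tail -n 100"

    # List any control-plane containers CRI-O started (empty in this run, per the log).
    out/minikube-linux-amd64 -p old-k8s-version-582207 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # Re-create the profile with the cgroup-driver override suggested at the end of the log.
    out/minikube-linux-amd64 delete -p old-k8s-version-582207
    out/minikube-linux-amd64 start -p old-k8s-version-582207 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd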

                                                
                                    

Test pass (270/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 26.55
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.2/json-events 14.02
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.14
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 112.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 204.08
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 12.51
35 TestAddons/parallel/Registry 17.49
37 TestAddons/parallel/InspektorGadget 12.03
38 TestAddons/parallel/MetricsServer 5.94
40 TestAddons/parallel/CSI 59.85
41 TestAddons/parallel/Headlamp 21.7
42 TestAddons/parallel/CloudSpanner 6.24
43 TestAddons/parallel/LocalPath 60.7
44 TestAddons/parallel/NvidiaDevicePlugin 6.94
45 TestAddons/parallel/Yakd 11.42
47 TestAddons/StoppedEnableDisable 91.25
48 TestCertOptions 68.8
49 TestCertExpiration 266.23
51 TestForceSystemdFlag 78.95
52 TestForceSystemdEnv 45.12
54 TestKVMDriverInstallOrUpdate 5.67
58 TestErrorSpam/setup 45.11
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.73
61 TestErrorSpam/pause 1.57
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 5.69
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 54.32
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.81
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
75 TestFunctional/serial/CacheCmd/cache/add_local 2.25
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 36.3
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.52
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.88
89 TestFunctional/parallel/ConfigCmd 0.32
90 TestFunctional/parallel/DashboardCmd 13.84
91 TestFunctional/parallel/DryRun 0.26
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 9.26
98 TestFunctional/parallel/AddonsCmd 0.12
99 TestFunctional/parallel/PersistentVolumeClaim 44.36
101 TestFunctional/parallel/SSHCmd 0.43
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 28.97
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.47
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
113 TestFunctional/parallel/License 0.7
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.18
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.63
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.54
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 8.63
122 TestFunctional/parallel/ImageCommands/Setup 2.96
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
124 TestFunctional/parallel/ProfileCmd/profile_list 0.48
125 TestFunctional/parallel/MountCmd/any-port 9.98
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.99
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.9
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.16
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.87
137 TestFunctional/parallel/ServiceCmd/List 0.46
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
139 TestFunctional/parallel/MountCmd/specific-port 2.06
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
141 TestFunctional/parallel/ServiceCmd/Format 0.31
142 TestFunctional/parallel/ServiceCmd/URL 0.38
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.46
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 215.59
161 TestMultiControlPlane/serial/DeployApp 7.41
162 TestMultiControlPlane/serial/PingHostFromPods 1.2
163 TestMultiControlPlane/serial/AddWorkerNode 58.84
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
166 TestMultiControlPlane/serial/CopyFile 13.26
167 TestMultiControlPlane/serial/StopSecondaryNode 91.65
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
169 TestMultiControlPlane/serial/RestartSecondaryNode 57.25
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 436.94
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.74
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
174 TestMultiControlPlane/serial/StopCluster 272.92
175 TestMultiControlPlane/serial/RestartCluster 123.19
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
177 TestMultiControlPlane/serial/AddSecondaryNode 79.64
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
182 TestJSONOutput/start/Command 51.65
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.77
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.66
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.39
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 88.69
214 TestMountStart/serial/StartWithMountFirst 26.66
215 TestMountStart/serial/VerifyMountFirst 0.38
216 TestMountStart/serial/StartWithMountSecond 29.8
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.68
219 TestMountStart/serial/VerifyMountPostDelete 0.38
220 TestMountStart/serial/Stop 1.29
221 TestMountStart/serial/RestartStopped 23.57
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 117.68
226 TestMultiNode/serial/DeployApp2Nodes 5.96
227 TestMultiNode/serial/PingHostFrom2Pods 0.78
228 TestMultiNode/serial/AddNode 51.9
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.6
231 TestMultiNode/serial/CopyFile 7.38
232 TestMultiNode/serial/StopNode 2.38
233 TestMultiNode/serial/StartAfterStop 41.4
234 TestMultiNode/serial/RestartKeepsNodes 349.98
235 TestMultiNode/serial/DeleteNode 2.74
236 TestMultiNode/serial/StopMultiNode 181.86
237 TestMultiNode/serial/RestartMultiNode 195.74
238 TestMultiNode/serial/ValidateNameConflict 44.89
245 TestScheduledStopUnix 115.71
249 TestRunningBinaryUpgrade 222.69
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 93.78
256 TestNoKubernetes/serial/StartWithStopK8s 20.17
265 TestPause/serial/Start 54.62
266 TestNoKubernetes/serial/Start 51.55
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
268 TestNoKubernetes/serial/ProfileList 1.89
270 TestNoKubernetes/serial/Stop 1.73
271 TestNoKubernetes/serial/StartNoArgs 27.44
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
280 TestNetworkPlugins/group/false 3.02
284 TestStoppedBinaryUpgrade/Setup 2.26
285 TestStoppedBinaryUpgrade/Upgrade 142.37
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
290 TestStartStop/group/no-preload/serial/FirstStart 111.89
292 TestStartStop/group/embed-certs/serial/FirstStart 90.91
293 TestStartStop/group/embed-certs/serial/DeployApp 11.35
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.09
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
297 TestStartStop/group/embed-certs/serial/Stop 90.87
298 TestStartStop/group/no-preload/serial/DeployApp 10.28
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
300 TestStartStop/group/no-preload/serial/Stop 91.03
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.06
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
305 TestStartStop/group/embed-certs/serial/SecondStart 334.99
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 361.27
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 339.9
312 TestStartStop/group/old-k8s-version/serial/Stop 2.31
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/embed-certs/serial/Pause 2.83
320 TestStartStop/group/newest-cni/serial/FirstStart 48.85
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/no-preload/serial/Pause 3.36
325 TestNetworkPlugins/group/auto/Start 87.98
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.66
329 TestStartStop/group/newest-cni/serial/Stop 11.38
330 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
331 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
332 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/newest-cni/serial/SecondStart 45.89
335 TestNetworkPlugins/group/flannel/Start 95.07
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
339 TestStartStop/group/newest-cni/serial/Pause 2.37
340 TestNetworkPlugins/group/enable-default-cni/Start 87.22
341 TestNetworkPlugins/group/auto/KubeletFlags 0.22
342 TestNetworkPlugins/group/auto/NetCatPod 11.25
343 TestNetworkPlugins/group/auto/DNS 0.16
344 TestNetworkPlugins/group/auto/Localhost 0.13
345 TestNetworkPlugins/group/auto/HairPin 0.12
346 TestNetworkPlugins/group/bridge/Start 60.94
347 TestNetworkPlugins/group/flannel/ControllerPod 6.01
348 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
349 TestNetworkPlugins/group/flannel/NetCatPod 10.24
350 TestNetworkPlugins/group/flannel/DNS 0.14
351 TestNetworkPlugins/group/flannel/Localhost 0.12
352 TestNetworkPlugins/group/flannel/HairPin 0.11
353 TestNetworkPlugins/group/calico/Start 86.52
354 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
355 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
356 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
357 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
358 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
359 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
360 TestNetworkPlugins/group/bridge/NetCatPod 11.27
361 TestNetworkPlugins/group/kindnet/Start 69.77
362 TestNetworkPlugins/group/bridge/DNS 0.17
363 TestNetworkPlugins/group/bridge/Localhost 0.15
364 TestNetworkPlugins/group/bridge/HairPin 0.13
365 TestNetworkPlugins/group/custom-flannel/Start 79.72
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.32
368 TestNetworkPlugins/group/calico/NetCatPod 12.58
369 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/DNS 0.14
371 TestNetworkPlugins/group/calico/Localhost 0.13
372 TestNetworkPlugins/group/calico/HairPin 0.12
373 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
374 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
375 TestNetworkPlugins/group/kindnet/DNS 0.18
376 TestNetworkPlugins/group/kindnet/Localhost 0.14
377 TestNetworkPlugins/group/kindnet/HairPin 0.18
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
381 TestNetworkPlugins/group/custom-flannel/DNS 0.15
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
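
The rows above come from the report's table of passing tests: an index, the test name, and the wall-clock duration in seconds. As a reading aid, here is a minimal Go sketch that parses rows of that shape and prints the slowest ones; the input file name durations.txt is hypothetical, and this is not part of the minikube test suite.

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type result struct {
	name    string
	seconds float64
}

func main() {
	// durations.txt is a hypothetical file holding the rows above verbatim.
	f, err := os.Open("durations.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var results []result
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each row is "<index> <test name> <seconds>".
		fields := strings.Fields(sc.Text())
		if len(fields) != 3 {
			continue
		}
		secs, err := strconv.ParseFloat(fields[2], 64)
		if err != nil {
			continue
		}
		results = append(results, result{name: fields[1], seconds: secs})
	}
	// Sort descending by duration and print the ten slowest tests.
	sort.Slice(results, func(i, j int) bool { return results[i].seconds > results[j].seconds })
	n := 10
	if len(results) < n {
		n = len(results)
	}
	for _, r := range results[:n] {
		fmt.Printf("%8.2fs  %s\n", r.seconds, r.name)
	}
}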

TestDownloadOnly/v1.20.0/json-events (26.55s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-944346 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-944346 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.554238972s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.55s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0401 19:45:26.290585   16301 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0401 19:45:26.290677   16301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
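
The preload-exists subtest above passes because the tarball cached by the previous subtest is found on disk. A minimal sketch of that kind of existence check, assuming the cache layout visible in the log (the helper preloadExists and its exact naming rules are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the preloaded-images tarball for the given
// Kubernetes version is already cached locally (cri-o, amd64 layout only).
func preloadExists(minikubeHome, k8sVersion string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	// e.g. /home/jenkins/minikube-integration/20506-9129/.minikube in this run.
	home := os.Getenv("MINIKUBE_HOME")
	fmt.Println(preloadExists(home, "v1.20.0"))
}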

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-944346
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-944346: exit status 85 (58.73578ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-944346 | jenkins | v1.35.0 | 01 Apr 25 19:44 UTC |          |
	|         | -p download-only-944346        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 19:44:59
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:44:59.776434   16313 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:44:59.776556   16313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:44:59.776569   16313 out.go:358] Setting ErrFile to fd 2...
	I0401 19:44:59.776574   16313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:44:59.776783   16313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	W0401 19:44:59.776931   16313 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20506-9129/.minikube/config/config.json: open /home/jenkins/minikube-integration/20506-9129/.minikube/config/config.json: no such file or directory
	I0401 19:44:59.777542   16313 out.go:352] Setting JSON to true
	I0401 19:44:59.778463   16313 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1644,"bootTime":1743535056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:44:59.778567   16313 start.go:139] virtualization: kvm guest
	I0401 19:44:59.781053   16313 out.go:97] [download-only-944346] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0401 19:44:59.781167   16313 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball: no such file or directory
	I0401 19:44:59.781201   16313 notify.go:220] Checking for updates...
	I0401 19:44:59.782824   16313 out.go:169] MINIKUBE_LOCATION=20506
	I0401 19:44:59.784493   16313 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:44:59.785895   16313 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 19:44:59.787203   16313 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:44:59.788524   16313 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0401 19:44:59.790933   16313 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 19:44:59.791146   16313 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:44:59.898327   16313 out.go:97] Using the kvm2 driver based on user configuration
	I0401 19:44:59.898365   16313 start.go:297] selected driver: kvm2
	I0401 19:44:59.898372   16313 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:44:59.898791   16313 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:44:59.898963   16313 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:44:59.913712   16313 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 19:44:59.913762   16313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:44:59.914319   16313 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0401 19:44:59.914462   16313 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 19:44:59.914489   16313 cni.go:84] Creating CNI manager for ""
	I0401 19:44:59.914532   16313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:44:59.914539   16313 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:44:59.914597   16313 start.go:340] cluster config:
	{Name:download-only-944346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-944346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:44:59.914760   16313 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:44:59.916891   16313 out.go:97] Downloading VM boot image ...
	I0401 19:44:59.916916   16313 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20506-9129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0401 19:45:10.637725   16313 out.go:97] Starting "download-only-944346" primary control-plane node in "download-only-944346" cluster
	I0401 19:45:10.637756   16313 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:45:10.750421   16313 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:10.750447   16313 cache.go:56] Caching tarball of preloaded images
	I0401 19:45:10.750607   16313 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:45:10.752431   16313 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0401 19:45:10.752457   16313 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:10.866841   16313 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:24.250697   16313 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:24.250807   16313 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-944346 host does not exist
	  To start a cluster, run: "minikube start -p download-only-944346"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
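
The Last Start log quoted above downloads the boot ISO and the preload tarball from URLs that carry a checksum hint (for the preload, ?checksum=md5:<sum>). The sketch below shows the general idea of downloading a file and verifying an md5 checksum in one pass; the URL and checksum are copied from the log, the destination path and helper name are hypothetical, and minikube's own download code is not reproduced here.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url to dest and fails if the md5 of the payload
// does not match wantMD5 (hex-encoded).
func fetchWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to the destination file and the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 are taken from the log above; the destination path is illustrative.
	err := fetchWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"f93b07cde9c3289306cbaeb7a1803c19",
	)
	fmt.Println(err)
}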

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-944346
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (14.02s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-774044 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-774044 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.020796679s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (14.02s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0401 19:45:40.635018   16301 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0401 19:45:40.635058   16301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-774044
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-774044: exit status 85 (61.101586ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-944346 | jenkins | v1.35.0 | 01 Apr 25 19:44 UTC |                     |
	|         | -p download-only-944346        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| delete  | -p download-only-944346        | download-only-944346 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| start   | -o=json --download-only        | download-only-774044 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |                     |
	|         | -p download-only-774044        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 19:45:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:45:26.652043   16574 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:45:26.652298   16574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:26.652308   16574 out.go:358] Setting ErrFile to fd 2...
	I0401 19:45:26.652312   16574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:26.652545   16574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 19:45:26.653133   16574 out.go:352] Setting JSON to true
	I0401 19:45:26.653980   16574 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1671,"bootTime":1743535056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:45:26.654061   16574 start.go:139] virtualization: kvm guest
	I0401 19:45:26.656230   16574 out.go:97] [download-only-774044] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:45:26.656364   16574 notify.go:220] Checking for updates...
	I0401 19:45:26.658034   16574 out.go:169] MINIKUBE_LOCATION=20506
	I0401 19:45:26.659469   16574 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:45:26.660986   16574 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 19:45:26.662490   16574 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:45:26.663758   16574 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0401 19:45:26.666024   16574 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 19:45:26.666241   16574 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:45:26.698867   16574 out.go:97] Using the kvm2 driver based on user configuration
	I0401 19:45:26.698896   16574 start.go:297] selected driver: kvm2
	I0401 19:45:26.698912   16574 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:45:26.699203   16574 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:45:26.699281   16574 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20506-9129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:45:26.714133   16574 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0401 19:45:26.714181   16574 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:45:26.714749   16574 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0401 19:45:26.714877   16574 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 19:45:26.714902   16574 cni.go:84] Creating CNI manager for ""
	I0401 19:45:26.714942   16574 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:45:26.714951   16574 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:45:26.715000   16574 start.go:340] cluster config:
	{Name:download-only-774044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-774044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:45:26.715084   16574 iso.go:125] acquiring lock: {Name:mkb4d16c66b9a96e560351dc0c0ad5272b583791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:45:26.716718   16574 out.go:97] Starting "download-only-774044" primary control-plane node in "download-only-774044" cluster
	I0401 19:45:26.716732   16574 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:45:27.322910   16574 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:27.322954   16574 cache.go:56] Caching tarball of preloaded images
	I0401 19:45:27.323087   16574 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:45:27.324906   16574 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0401 19:45:27.324919   16574 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:27.433736   16574 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20506-9129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-774044 host does not exist
	  To start a cluster, run: "minikube start -p download-only-774044"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-774044
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0401 19:45:41.223190   16301 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-856518 --alsologtostderr --binary-mirror http://127.0.0.1:41083 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-856518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-856518
--- PASS: TestBinaryMirror (0.60s)
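
TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:41083) so that kubectl is fetched from that mirror instead of dl.k8s.io. A minimal sketch of serving a directory as such a mirror; the ./mirror directory name is hypothetical:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory laid out like the upstream release tree, e.g.
	// ./mirror/v1.32.2/bin/linux/amd64/kubectl and kubectl.sha256.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("binary mirror listening on 127.0.0.1:41083")
	log.Fatal(http.ListenAndServe("127.0.0.1:41083", fs))
}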

                                                
                                    
TestOffline (112.62s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-838550 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-838550 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m51.759500969s)
helpers_test.go:175: Cleaning up "offline-crio-838550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-838550
--- PASS: TestOffline (112.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-357468
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-357468: exit status 85 (47.047114ms)

                                                
                                                
-- stdout --
	* Profile "addons-357468" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-357468"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
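
Both PreSetup subtests rely on the fact that addon commands against a profile that does not exist fail with exit status 85, as the non-zero exit above shows. A small sketch of asserting that exit code from Go; the binary path and profile name are copied from the report, while the check itself is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-357468")
	err := cmd.Run()

	// Exit status 85 is what minikube returns when the profile is unknown.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got expected exit status 85 (profile not found)")
		return
	}
	fmt.Println("unexpected result:", err)
}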

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-357468
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-357468: exit status 85 (50.383089ms)

                                                
                                                
-- stdout --
	* Profile "addons-357468" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-357468"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (204.08s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-357468 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-357468 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m24.076122158s)
--- PASS: TestAddons/Setup (204.08s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-357468 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-357468 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (12.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-357468 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-357468 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d351faea-3a0e-4224-9e5f-f278ee6d59a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d351faea-3a0e-4224-9e5f-f278ee6d59a9] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.004355974s
addons_test.go:633: (dbg) Run:  kubectl --context addons-357468 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-357468 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-357468 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.51s)

                                                
                                    
TestAddons/parallel/Registry (17.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.56241ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-9sz7s" [6aaae033-47aa-4ef3-84b0-7a7e433ed652] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004182437s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tr78m" [2b69a23c-961a-4f69-bdf5-7b655e5ab42c] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004515669s
addons_test.go:331: (dbg) Run:  kubectl --context addons-357468 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-357468 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-357468 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.682167755s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 ip
2025/04/01 19:49:44 [DEBUG] GET http://192.168.39.65:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.49s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.03s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pqmxg" [f1e8b087-3b80-408c-aa8e-9572f74cacbf] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00343726s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable inspektor-gadget --alsologtostderr -v=1: (6.021177003s)
--- PASS: TestAddons/parallel/InspektorGadget (12.03s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.323985ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5gb2p" [8290b608-2b6f-48d1-b0e9-b3224861fc9f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004284754s
addons_test.go:402: (dbg) Run:  kubectl --context addons-357468 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.94s)

                                                
                                    
TestAddons/parallel/CSI (59.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0401 19:49:33.474801   16301 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0401 19:49:33.487201   16301 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0401 19:49:33.487229   16301 kapi.go:107] duration metric: took 12.462506ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 12.473769ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-357468 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-357468 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [30dc6957-80eb-44c4-8d95-ec99d67991ea] Pending
helpers_test.go:344: "task-pv-pod" [30dc6957-80eb-44c4-8d95-ec99d67991ea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [30dc6957-80eb-44c4-8d95-ec99d67991ea] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.010901293s
addons_test.go:511: (dbg) Run:  kubectl --context addons-357468 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-357468 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-357468 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-357468 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-357468 delete pod task-pv-pod: (1.93778731s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-357468 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-357468 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-357468 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0d4da170-5d2d-411b-8f98-496e54109da9] Pending
helpers_test.go:344: "task-pv-pod-restore" [0d4da170-5d2d-411b-8f98-496e54109da9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0d4da170-5d2d-411b-8f98-496e54109da9] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00422948s
addons_test.go:553: (dbg) Run:  kubectl --context addons-357468 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-357468 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-357468 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.96523568s)
--- PASS: TestAddons/parallel/CSI (59.85s)
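
The long run of get pvc ... -o jsonpath={.status.phase} lines above is the test polling until each claim reports Bound before moving on. A compact sketch of such a polling loop; the kubectl invocation mirrors the report, while the wrapper function, its timeout, and the 2-second interval are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls a PersistentVolumeClaim until it reaches the wanted
// phase (e.g. "Bound") or the timeout expires.
func waitForPVCPhase(kubeContext, ns, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-357468", "default", "hpvc", "Bound", 6*time.Minute))
}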

                                                
                                    
TestAddons/parallel/Headlamp (21.7s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-357468 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-t4wgq" [57708a98-767c-46f2-ae7e-a89cd0e742c3] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-t4wgq" [57708a98-767c-46f2-ae7e-a89cd0e742c3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-t4wgq" [57708a98-767c-46f2-ae7e-a89cd0e742c3] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003530179s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable headlamp --alsologtostderr -v=1: (5.749825157s)
--- PASS: TestAddons/parallel/Headlamp (21.70s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-xftcs" [45983e73-9ffe-4cce-b1a3-91412fb2461e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003023753s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable cloud-spanner --alsologtostderr -v=1: (1.231539193s)
--- PASS: TestAddons/parallel/CloudSpanner (6.24s)

                                                
                                    
TestAddons/parallel/LocalPath (60.7s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-357468 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-357468 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ad645611-ceba-42da-8d7c-55ee35c1da33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ad645611-ceba-42da-8d7c-55ee35c1da33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ad645611-ceba-42da-8d7c-55ee35c1da33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.004094203s
addons_test.go:906: (dbg) Run:  kubectl --context addons-357468 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 ssh "cat /opt/local-path-provisioner/pvc-0afaf634-c3c1-425b-9181-27260ba53259_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-357468 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-357468 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.735717957s)
--- PASS: TestAddons/parallel/LocalPath (60.70s)

TestAddons/parallel/NvidiaDevicePlugin (6.94s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vtmjq" [62241ccb-0ac0-423e-b2ec-9c837985d9ab] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003475567s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.94s)

TestAddons/parallel/Yakd (11.42s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-f67bp" [ecf278e1-0e6e-4baf-ac8c-f0f8223229af] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004474521s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-357468 addons disable yakd --alsologtostderr -v=1: (6.410891075s)
--- PASS: TestAddons/parallel/Yakd (11.42s)

TestAddons/StoppedEnableDisable (91.25s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-357468
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-357468: (1m30.978439273s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-357468
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-357468
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-357468
--- PASS: TestAddons/StoppedEnableDisable (91.25s)

TestCertOptions (68.8s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-454573 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-454573 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m7.377775356s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-454573 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-454573 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-454573 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-454573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-454573
--- PASS: TestCertOptions (68.80s)

TestCertExpiration (266.23s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-808084 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-808084 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (45.579565857s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-808084 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-808084 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.634301586s)
helpers_test.go:175: Cleaning up "cert-expiration-808084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-808084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-808084: (1.014202801s)
--- PASS: TestCertExpiration (266.23s)

TestForceSystemdFlag (78.95s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-846715 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-846715 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.92886174s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-846715 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-846715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-846715
--- PASS: TestForceSystemdFlag (78.95s)

TestForceSystemdEnv (45.12s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-818542 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-818542 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.13864909s)
helpers_test.go:175: Cleaning up "force-systemd-env-818542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-818542
--- PASS: TestForceSystemdEnv (45.12s)

TestKVMDriverInstallOrUpdate (5.67s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0401 20:51:11.167568   16301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 20:51:11.167755   16301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0401 20:51:11.194788   16301 install.go:62] docker-machine-driver-kvm2: exit status 1
W0401 20:51:11.194986   16301 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0401 20:51:11.195059   16301 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate552366716/001/docker-machine-driver-kvm2
I0401 20:51:11.442930   16301 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate552366716/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0003f7a28 gz:0xc0003f7ab0 tar:0xc0003f7a60 tar.bz2:0xc0003f7a70 tar.gz:0xc0003f7a80 tar.xz:0xc0003f7a90 tar.zst:0xc0003f7aa0 tbz2:0xc0003f7a70 tgz:0xc0003f7a80 txz:0xc0003f7a90 tzst:0xc0003f7aa0 xz:0xc0003f7ab8 zip:0xc0003f7ac0 zst:0xc0003f7ad0] Getters:map[file:0xc0009ec200 http:0xc000c9d540 https:0xc000c9d590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0401 20:51:11.443004   16301 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate552366716/001/docker-machine-driver-kvm2
I0401 20:51:14.804025   16301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 20:51:14.804104   16301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0401 20:51:14.837327   16301 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0401 20:51:14.837358   16301 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0401 20:51:14.837414   16301 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0401 20:51:14.837438   16301 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate552366716/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.67s)

TestErrorSpam/setup (45.11s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-366753 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-366753 --driver=kvm2  --container-runtime=crio
E0401 19:54:06.658368   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:06.664738   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:06.676114   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:06.697495   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:06.738935   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:06.820405   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:06.982012   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:07.303710   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:07.945745   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:09.227385   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:11.790365   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:16.912137   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:27.153664   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:54:47.635491   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-366753 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-366753 --driver=kvm2  --container-runtime=crio: (45.111759198s)
--- PASS: TestErrorSpam/setup (45.11s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.73s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.72s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (5.69s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 stop: (2.298065906s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 stop: (1.751346336s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-366753 --log_dir /tmp/nospam-366753 stop: (1.642066983s)
--- PASS: TestErrorSpam/stop (5.69s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20506-9129/.minikube/files/etc/test/nested/copy/16301/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.32s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366801 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0401 19:55:28.597924   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-366801 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.322042152s)
--- PASS: TestFunctional/serial/StartWithProxy (54.32s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.81s)
=== RUN   TestFunctional/serial/SoftStart
I0401 19:55:54.641091   16301 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366801 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-366801 --alsologtostderr -v=8: (40.811179128s)
functional_test.go:680: soft start took 40.811876547s for "functional-366801" cluster.
I0401 19:56:35.452570   16301 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (40.81s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-366801 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 cache add registry.k8s.io/pause:3.1: (1.081647625s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 cache add registry.k8s.io/pause:3.3: (1.094826619s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 cache add registry.k8s.io/pause:latest: (1.085918037s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-366801 /tmp/TestFunctionalserialCacheCmdcacheadd_local1922104267/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cache add minikube-local-cache-test:functional-366801
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 cache add minikube-local-cache-test:functional-366801: (1.939234145s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cache delete minikube-local-cache-test:functional-366801
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-366801
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.897653ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 cache reload: (1.035150649s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 kubectl -- --context functional-366801 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-366801 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.3s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366801 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0401 19:56:50.519974   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-366801 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.300345984s)
functional_test.go:778: restart took 36.300440309s for "functional-366801" cluster.
I0401 19:57:19.738742   16301 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (36.30s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-366801 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.52s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 logs: (1.516036999s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.43s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 logs --file /tmp/TestFunctionalserialLogsFileCmd2773458717/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 logs --file /tmp/TestFunctionalserialLogsFileCmd2773458717/001/logs.txt: (1.424340713s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

TestFunctional/serial/InvalidService (4.88s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-366801 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-366801
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-366801: exit status 115 (263.075599ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.138:31329 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-366801 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-366801 delete -f testdata/invalidsvc.yaml: (1.415052871s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.32s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 config get cpus: exit status 14 (60.971141ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 config get cpus: exit status 14 (45.240877ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (13.84s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-366801 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-366801 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24124: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.84s)

TestFunctional/parallel/DryRun (0.26s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366801 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-366801 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (129.713462ms)

-- stdout --
	* [functional-366801] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0401 19:57:31.862157   23957 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:57:31.862298   23957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:57:31.862308   23957 out.go:358] Setting ErrFile to fd 2...
	I0401 19:57:31.862312   23957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:57:31.862541   23957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 19:57:31.863064   23957 out.go:352] Setting JSON to false
	I0401 19:57:31.863980   23957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2396,"bootTime":1743535056,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:57:31.864038   23957 start.go:139] virtualization: kvm guest
	I0401 19:57:31.865738   23957 out.go:177] * [functional-366801] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:57:31.866798   23957 notify.go:220] Checking for updates...
	I0401 19:57:31.866818   23957 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 19:57:31.867952   23957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:57:31.869278   23957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 19:57:31.870465   23957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:57:31.871574   23957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:57:31.872572   23957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:57:31.874197   23957 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:57:31.874798   23957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:57:31.874888   23957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:57:31.890355   23957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45415
	I0401 19:57:31.890862   23957 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:57:31.891472   23957 main.go:141] libmachine: Using API Version  1
	I0401 19:57:31.891501   23957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:57:31.891869   23957 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:57:31.892065   23957 main.go:141] libmachine: (functional-366801) Calling .DriverName
	I0401 19:57:31.892281   23957 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:57:31.892602   23957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:57:31.892647   23957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:57:31.908218   23957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
	I0401 19:57:31.908673   23957 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:57:31.909177   23957 main.go:141] libmachine: Using API Version  1
	I0401 19:57:31.909214   23957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:57:31.909507   23957 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:57:31.909674   23957 main.go:141] libmachine: (functional-366801) Calling .DriverName
	I0401 19:57:31.942895   23957 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:57:31.944038   23957 start.go:297] selected driver: kvm2
	I0401 19:57:31.944050   23957 start.go:901] validating driver "kvm2" against &{Name:functional-366801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-366801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.138 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:57:31.944169   23957 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:57:31.945965   23957 out.go:201] 
	W0401 19:57:31.947049   23957 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0401 19:57:31.948168   23957 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366801 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)

TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366801 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-366801 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.728254ms)

-- stdout --
	* [functional-366801] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0401 19:57:32.136848   24012 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:57:32.136999   24012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:57:32.137011   24012 out.go:358] Setting ErrFile to fd 2...
	I0401 19:57:32.137018   24012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:57:32.137392   24012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 19:57:32.138169   24012 out.go:352] Setting JSON to false
	I0401 19:57:32.139536   24012 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2396,"bootTime":1743535056,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:57:32.139660   24012 start.go:139] virtualization: kvm guest
	I0401 19:57:32.141670   24012 out.go:177] * [functional-366801] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0401 19:57:32.143131   24012 notify.go:220] Checking for updates...
	I0401 19:57:32.143176   24012 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 19:57:32.144610   24012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:57:32.145840   24012 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 19:57:32.146993   24012 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 19:57:32.148151   24012 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:57:32.149400   24012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:57:32.151059   24012 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:57:32.151631   24012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:57:32.151698   24012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:57:32.167845   24012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0401 19:57:32.168416   24012 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:57:32.168954   24012 main.go:141] libmachine: Using API Version  1
	I0401 19:57:32.168984   24012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:57:32.169450   24012 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:57:32.169632   24012 main.go:141] libmachine: (functional-366801) Calling .DriverName
	I0401 19:57:32.169928   24012 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:57:32.170369   24012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:57:32.170413   24012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:57:32.186441   24012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0401 19:57:32.186987   24012 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:57:32.187480   24012 main.go:141] libmachine: Using API Version  1
	I0401 19:57:32.187506   24012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:57:32.187818   24012 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:57:32.187963   24012 main.go:141] libmachine: (functional-366801) Calling .DriverName
	I0401 19:57:32.222068   24012 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0401 19:57:32.223135   24012 start.go:297] selected driver: kvm2
	I0401 19:57:32.223150   24012 start.go:901] validating driver "kvm2" against &{Name:functional-366801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-366801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.138 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:57:32.223266   24012 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:57:32.225127   24012 out.go:201] 
	W0401 19:57:32.226643   24012 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0401 19:57:32.227779   24012 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
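The localized failure above is the expected output for this test: RSRC_INSUFFICIENT_REQ_MEMORY reports, in French, that the requested 250 MiB memory allocation is below the usable minimum of 1800 MB. A minimal sketch of reproducing such localized output by hand, assuming minikube picks its display language up from the standard LC_ALL/LANG environment variables (an assumption, not taken from this log) and that --dry-run/--memory behave as in the run above:

	# hedged sketch -- locale selection via LC_ALL/LANG is assumed, not shown in this log
	LC_ALL=fr LANG=fr out/minikube-linux-amd64 start -p functional-366801 --dry-run --memory=250MB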

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
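For reference, the templated form of `status` exercised above prints one comma-separated line built from the named fields; on a healthy cluster the values would look roughly like the following (a hedged sketch; the actual templated output is not captured in this log):

	out/minikube-linux-amd64 -p functional-366801 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# expected shape (values assumed): host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured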

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-366801 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-366801 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-pswf6" [0feaf211-5d81-4361-a60e-d7d87ae201dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-pswf6" [0feaf211-5d81-4361-a60e-d7d87ae201dc] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.344757036s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.138:30362
functional_test.go:1692: http://192.168.39.138:30362: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-pswf6

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.138:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.138:30362
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.26s)
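The NodePort endpoint discovered above can be exercised directly once `service ... --url` has reported it; a hedged sketch, whose body should match the echoserver response recorded in this test:

	curl -s http://192.168.39.138:30362/
	# expected: the "Hostname: hello-node-connect-..." echoserver page shown above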

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [be6c379c-87fc-488d-8b70-bc01466b6313] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003961854s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-366801 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-366801 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-366801 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-366801 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e02c1c38-37ff-41b2-a474-89f6e20093df] Pending
helpers_test.go:344: "sp-pod" [e02c1c38-37ff-41b2-a474-89f6e20093df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/04/01 19:57:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [e02c1c38-37ff-41b2-a474-89f6e20093df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.003574459s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-366801 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-366801 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-366801 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fec0564-d290-4c33-a8e1-3c6a8358c63e] Pending
helpers_test.go:344: "sp-pod" [2fec0564-d290-4c33-a8e1-3c6a8358c63e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fec0564-d290-4c33-a8e1-3c6a8358c63e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00375421s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-366801 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.36s)
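The sequence above is the core persistence check: write a file through the claim, delete and recreate the consuming pod, then confirm the file is still on the volume. Condensed from the commands in this log:

	kubectl --context functional-366801 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-366801 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-366801 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-366801 exec sp-pod -- ls /tmp/mount   # foo should still be listed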

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh -n functional-366801 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cp functional-366801:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4272329272/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh -n functional-366801 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh -n functional-366801 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-366801 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-p5gtj" [805c72cb-9e37-4e4f-b3e7-6bbc4dbc9833] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-p5gtj" [805c72cb-9e37-4e4f-b3e7-6bbc4dbc9833] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.003873828s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-366801 exec mysql-58ccfd96bb-p5gtj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-366801 exec mysql-58ccfd96bb-p5gtj -- mysql -ppassword -e "show databases;": exit status 1 (150.068224ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0401 19:58:07.006394   16301 retry.go:31] will retry after 1.316069307s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-366801 exec mysql-58ccfd96bb-p5gtj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-366801 exec mysql-58ccfd96bb-p5gtj -- mysql -ppassword -e "show databases;": exit status 1 (127.876261ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0401 19:58:08.451453   16301 retry.go:31] will retry after 1.967226222s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-366801 exec mysql-58ccfd96bb-p5gtj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.97s)
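The two ERROR 2002 failures above simply mean mysqld had not yet created its socket; the harness retries until the query succeeds. A hedged sketch of the same readiness loop:

	until kubectl --context functional-366801 exec mysql-58ccfd96bb-p5gtj -- \
	  mysql -ppassword -e "show databases;"; do
	  sleep 2   # retry until mysqld accepts connections on its socket
	done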

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/16301/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /etc/test/nested/copy/16301/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/16301.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /etc/ssl/certs/16301.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/16301.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /usr/share/ca-certificates/16301.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/163012.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /etc/ssl/certs/163012.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/163012.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /usr/share/ca-certificates/163012.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.47s)
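The `.0` files checked above are OpenSSL subject-hash names for the same certificates (51391683.0 is checked alongside 16301.pem, 3ec20f2e.0 alongside 163012.pem); a hedged sketch of confirming that pairing inside the VM:

	out/minikube-linux-amd64 -p functional-366801 ssh \
	  "openssl x509 -noout -hash -in /usr/share/ca-certificates/16301.pem"
	# should print the hash used as the filename, 51391683 here (pairing assumed from the checks above)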

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-366801 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh "sudo systemctl is-active docker": exit status 1 (231.489697ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh "sudo systemctl is-active containerd": exit status 1 (214.966221ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
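The two non-zero exits above are the expected result: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active (status 3 here), so docker and containerd being reported "inactive" is exactly what this crio-based profile should show. A sketch of the same check with the exit code made explicit:

	out/minikube-linux-amd64 -p functional-366801 ssh 'sudo systemctl is-active docker; echo exit=$?'
	# expected on this crio profile: "inactive" followed by a non-zero exit code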

                                                
                                    
x
+
TestFunctional/parallel/License (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-366801 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-366801 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-kll6w" [f3c84968-e50b-4a6d-b00f-09c412902a34] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-kll6w" [f3c84968-e50b-4a6d-b00f-09c412902a34] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004193228s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.18s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366801 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-366801
localhost/kicbase/echo-server:functional-366801
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366801 image ls --format short --alsologtostderr:
I0401 19:57:47.071672   25372 out.go:345] Setting OutFile to fd 1 ...
I0401 19:57:47.071795   25372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:47.071805   25372 out.go:358] Setting ErrFile to fd 2...
I0401 19:57:47.071810   25372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:47.072041   25372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
I0401 19:57:47.072573   25372 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:47.072690   25372 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:47.073047   25372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:47.073102   25372 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:47.088627   25372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
I0401 19:57:47.089132   25372 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:47.089695   25372 main.go:141] libmachine: Using API Version  1
I0401 19:57:47.089726   25372 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:47.090108   25372 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:47.090300   25372 main.go:141] libmachine: (functional-366801) Calling .GetState
I0401 19:57:47.092334   25372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:47.092390   25372 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:47.107818   25372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
I0401 19:57:47.108339   25372 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:47.108796   25372 main.go:141] libmachine: Using API Version  1
I0401 19:57:47.108817   25372 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:47.109237   25372 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:47.109417   25372 main.go:141] libmachine: (functional-366801) Calling .DriverName
I0401 19:57:47.109643   25372 ssh_runner.go:195] Run: systemctl --version
I0401 19:57:47.109667   25372 main.go:141] libmachine: (functional-366801) Calling .GetSSHHostname
I0401 19:57:47.112332   25372 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:47.112710   25372 main.go:141] libmachine: (functional-366801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4a:0f", ip: ""} in network mk-functional-366801: {Iface:virbr1 ExpiryTime:2025-04-01 20:55:15 +0000 UTC Type:0 Mac:52:54:00:9d:4a:0f Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:functional-366801 Clientid:01:52:54:00:9d:4a:0f}
I0401 19:57:47.112741   25372 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined IP address 192.168.39.138 and MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:47.112860   25372 main.go:141] libmachine: (functional-366801) Calling .GetSSHPort
I0401 19:57:47.113005   25372 main.go:141] libmachine: (functional-366801) Calling .GetSSHKeyPath
I0401 19:57:47.113134   25372 main.go:141] libmachine: (functional-366801) Calling .GetSSHUsername
I0401 19:57:47.113311   25372 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/functional-366801/id_rsa Username:docker}
I0401 19:57:47.209086   25372 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 19:57:47.249172   25372 main.go:141] libmachine: Making call to close driver server
I0401 19:57:47.249192   25372 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:47.249483   25372 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:47.249522   25372 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:47.249534   25372 main.go:141] libmachine: (functional-366801) DBG | Closing plugin on server side
I0401 19:57:47.249548   25372 main.go:141] libmachine: Making call to close driver server
I0401 19:57:47.249559   25372 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:47.249788   25372 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:47.249799   25372 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
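As the --alsologtostderr trace above shows, `image ls` connects to the node over SSH and delegates to crictl; the same raw listing can be taken directly, a hedged equivalent of what the command does internally:

	out/minikube-linux-amd64 -p functional-366801 ssh "sudo crictl images --output json"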

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366801 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-366801  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| localhost/minikube-local-cache-test     | functional-366801  | 3930cd1aadc63 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366801 image ls --format table --alsologtostderr:
I0401 19:57:54.677374   25556 out.go:345] Setting OutFile to fd 1 ...
I0401 19:57:54.677515   25556 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:54.677527   25556 out.go:358] Setting ErrFile to fd 2...
I0401 19:57:54.677535   25556 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:54.677796   25556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
I0401 19:57:54.678602   25556 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:54.678747   25556 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:54.679275   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:54.679334   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:54.694390   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
I0401 19:57:54.694856   25556 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:54.695436   25556 main.go:141] libmachine: Using API Version  1
I0401 19:57:54.695471   25556 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:54.695815   25556 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:54.696011   25556 main.go:141] libmachine: (functional-366801) Calling .GetState
I0401 19:57:54.697928   25556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:54.697975   25556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:54.712943   25556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
I0401 19:57:54.713387   25556 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:54.713960   25556 main.go:141] libmachine: Using API Version  1
I0401 19:57:54.713993   25556 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:54.714348   25556 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:54.714548   25556 main.go:141] libmachine: (functional-366801) Calling .DriverName
I0401 19:57:54.714762   25556 ssh_runner.go:195] Run: systemctl --version
I0401 19:57:54.714788   25556 main.go:141] libmachine: (functional-366801) Calling .GetSSHHostname
I0401 19:57:54.717595   25556 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:54.718051   25556 main.go:141] libmachine: (functional-366801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4a:0f", ip: ""} in network mk-functional-366801: {Iface:virbr1 ExpiryTime:2025-04-01 20:55:15 +0000 UTC Type:0 Mac:52:54:00:9d:4a:0f Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:functional-366801 Clientid:01:52:54:00:9d:4a:0f}
I0401 19:57:54.718078   25556 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined IP address 192.168.39.138 and MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:54.718268   25556 main.go:141] libmachine: (functional-366801) Calling .GetSSHPort
I0401 19:57:54.718470   25556 main.go:141] libmachine: (functional-366801) Calling .GetSSHKeyPath
I0401 19:57:54.718631   25556 main.go:141] libmachine: (functional-366801) Calling .GetSSHUsername
I0401 19:57:54.718782   25556 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/functional-366801/id_rsa Username:docker}
I0401 19:57:54.833547   25556 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 19:57:55.162281   25556 main.go:141] libmachine: Making call to close driver server
I0401 19:57:55.162308   25556 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:55.162627   25556 main.go:141] libmachine: (functional-366801) DBG | Closing plugin on server side
I0401 19:57:55.162695   25556 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:55.162713   25556 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:55.162726   25556 main.go:141] libmachine: Making call to close driver server
I0401 19:57:55.162738   25556 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:55.162935   25556 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:55.162989   25556 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:55.162954   25556 main.go:141] libmachine: (functional-366801) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366801 image ls --format yaml --alsologtostderr:
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3930cd1aadc630ae0f89adf408d01b08fd40096619342cb7a2c25d8bcaed9683
repoDigests:
- localhost/minikube-local-cache-test@sha256:1d31671c044bef32cfc2d9fb7dfc87a5225d56af468ed2e3e627dd3c7dc93676
repoTags:
- localhost/minikube-local-cache-test:functional-366801
size: "3330"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-366801
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366801 image ls --format yaml --alsologtostderr:
I0401 19:57:47.299466   25396 out.go:345] Setting OutFile to fd 1 ...
I0401 19:57:47.299727   25396 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:47.299738   25396 out.go:358] Setting ErrFile to fd 2...
I0401 19:57:47.299743   25396 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:47.299954   25396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
I0401 19:57:47.300526   25396 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:47.300628   25396 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:47.300982   25396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:47.301028   25396 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:47.315962   25396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
I0401 19:57:47.316433   25396 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:47.316897   25396 main.go:141] libmachine: Using API Version  1
I0401 19:57:47.316923   25396 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:47.317225   25396 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:47.317444   25396 main.go:141] libmachine: (functional-366801) Calling .GetState
I0401 19:57:47.318977   25396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:47.319014   25396 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:47.333584   25396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
I0401 19:57:47.333998   25396 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:47.334505   25396 main.go:141] libmachine: Using API Version  1
I0401 19:57:47.334522   25396 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:47.334917   25396 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:47.335077   25396 main.go:141] libmachine: (functional-366801) Calling .DriverName
I0401 19:57:47.335266   25396 ssh_runner.go:195] Run: systemctl --version
I0401 19:57:47.335289   25396 main.go:141] libmachine: (functional-366801) Calling .GetSSHHostname
I0401 19:57:47.337908   25396 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:47.338250   25396 main.go:141] libmachine: (functional-366801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4a:0f", ip: ""} in network mk-functional-366801: {Iface:virbr1 ExpiryTime:2025-04-01 20:55:15 +0000 UTC Type:0 Mac:52:54:00:9d:4a:0f Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:functional-366801 Clientid:01:52:54:00:9d:4a:0f}
I0401 19:57:47.338276   25396 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined IP address 192.168.39.138 and MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:47.338390   25396 main.go:141] libmachine: (functional-366801) Calling .GetSSHPort
I0401 19:57:47.338554   25396 main.go:141] libmachine: (functional-366801) Calling .GetSSHKeyPath
I0401 19:57:47.338714   25396 main.go:141] libmachine: (functional-366801) Calling .GetSSHUsername
I0401 19:57:47.338842   25396 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/functional-366801/id_rsa Username:docker}
I0401 19:57:47.420789   25396 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 19:57:47.467656   25396 main.go:141] libmachine: Making call to close driver server
I0401 19:57:47.467668   25396 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:47.467939   25396 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:47.467961   25396 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:47.467970   25396 main.go:141] libmachine: (functional-366801) DBG | Closing plugin on server side
I0401 19:57:47.467979   25396 main.go:141] libmachine: Making call to close driver server
I0401 19:57:47.467987   25396 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:47.468201   25396 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:47.468247   25396 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (8.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh pgrep buildkitd: exit status 1 (191.863088ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image build -t localhost/my-image:functional-366801 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 image build -t localhost/my-image:functional-366801 testdata/build --alsologtostderr: (8.150262878s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366801 image build -t localhost/my-image:functional-366801 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b3c396e829b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-366801
--> f87b840c346
Successfully tagged localhost/my-image:functional-366801
f87b840c34646d0bdf69637ea9e0143a85d68f63c7b29b321c825b588013e10c
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366801 image build -t localhost/my-image:functional-366801 testdata/build --alsologtostderr:
I0401 19:57:47.707170   25449 out.go:345] Setting OutFile to fd 1 ...
I0401 19:57:47.707422   25449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:47.707432   25449 out.go:358] Setting ErrFile to fd 2...
I0401 19:57:47.707436   25449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:57:47.707597   25449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
I0401 19:57:47.708114   25449 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:47.708609   25449 config.go:182] Loaded profile config "functional-366801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:57:47.708950   25449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:47.708998   25449 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:47.724308   25449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
I0401 19:57:47.724854   25449 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:47.725361   25449 main.go:141] libmachine: Using API Version  1
I0401 19:57:47.725381   25449 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:47.725807   25449 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:47.725980   25449 main.go:141] libmachine: (functional-366801) Calling .GetState
I0401 19:57:47.727874   25449 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 19:57:47.727915   25449 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 19:57:47.743732   25449 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
I0401 19:57:47.744144   25449 main.go:141] libmachine: () Calling .GetVersion
I0401 19:57:47.744481   25449 main.go:141] libmachine: Using API Version  1
I0401 19:57:47.744501   25449 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 19:57:47.744844   25449 main.go:141] libmachine: () Calling .GetMachineName
I0401 19:57:47.745005   25449 main.go:141] libmachine: (functional-366801) Calling .DriverName
I0401 19:57:47.745243   25449 ssh_runner.go:195] Run: systemctl --version
I0401 19:57:47.745271   25449 main.go:141] libmachine: (functional-366801) Calling .GetSSHHostname
I0401 19:57:47.747680   25449 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:47.748075   25449 main.go:141] libmachine: (functional-366801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:4a:0f", ip: ""} in network mk-functional-366801: {Iface:virbr1 ExpiryTime:2025-04-01 20:55:15 +0000 UTC Type:0 Mac:52:54:00:9d:4a:0f Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:functional-366801 Clientid:01:52:54:00:9d:4a:0f}
I0401 19:57:47.748104   25449 main.go:141] libmachine: (functional-366801) DBG | domain functional-366801 has defined IP address 192.168.39.138 and MAC address 52:54:00:9d:4a:0f in network mk-functional-366801
I0401 19:57:47.748173   25449 main.go:141] libmachine: (functional-366801) Calling .GetSSHPort
I0401 19:57:47.748303   25449 main.go:141] libmachine: (functional-366801) Calling .GetSSHKeyPath
I0401 19:57:47.748397   25449 main.go:141] libmachine: (functional-366801) Calling .GetSSHUsername
I0401 19:57:47.748496   25449 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/functional-366801/id_rsa Username:docker}
I0401 19:57:47.829479   25449 build_images.go:161] Building image from path: /tmp/build.440263690.tar
I0401 19:57:47.829558   25449 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0401 19:57:47.840863   25449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.440263690.tar
I0401 19:57:47.845145   25449 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.440263690.tar: stat -c "%s %y" /var/lib/minikube/build/build.440263690.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.440263690.tar': No such file or directory
I0401 19:57:47.845172   25449 ssh_runner.go:362] scp /tmp/build.440263690.tar --> /var/lib/minikube/build/build.440263690.tar (3072 bytes)
I0401 19:57:47.900041   25449 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.440263690
I0401 19:57:47.910865   25449 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.440263690 -xf /var/lib/minikube/build/build.440263690.tar
I0401 19:57:47.927184   25449 crio.go:315] Building image: /var/lib/minikube/build/build.440263690
I0401 19:57:47.927255   25449 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-366801 /var/lib/minikube/build/build.440263690 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0401 19:57:55.777234   25449 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-366801 /var/lib/minikube/build/build.440263690 --cgroup-manager=cgroupfs: (7.849954116s)
I0401 19:57:55.777324   25449 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.440263690
I0401 19:57:55.792548   25449 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.440263690.tar
I0401 19:57:55.807489   25449 build_images.go:217] Built localhost/my-image:functional-366801 from /tmp/build.440263690.tar
I0401 19:57:55.807521   25449 build_images.go:133] succeeded building to: functional-366801
I0401 19:57:55.807528   25449 build_images.go:134] failed building to: 
I0401 19:57:55.807553   25449 main.go:141] libmachine: Making call to close driver server
I0401 19:57:55.807565   25449 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:55.807807   25449 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:55.807825   25449 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 19:57:55.807851   25449 main.go:141] libmachine: Making call to close driver server
I0401 19:57:55.807858   25449 main.go:141] libmachine: (functional-366801) Calling .Close
I0401 19:57:55.808142   25449 main.go:141] libmachine: Successfully made call to close driver server
I0401 19:57:55.808154   25449 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.63s)
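
The trace above is the CRI-O path of minikube's image build: the local build context is tarred, copied into the guest under /var/lib/minikube/build, unpacked, and handed to podman with the cgroupfs cgroup manager. A minimal sketch of the user-facing equivalent, assuming an illustrative ./build context directory (the tag matches the test profile):

    # build a local context inside the functional-366801 VM, then confirm it is in the runtime's store
    minikube -p functional-366801 image build -t localhost/my-image:functional-366801 ./build
    minikube -p functional-366801 image ls | grep my-image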

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.937319238s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-366801
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.96s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "432.88219ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "49.318142ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdany-port3299190621/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1743537449536916520" to /tmp/TestFunctionalparallelMountCmdany-port3299190621/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1743537449536916520" to /tmp/TestFunctionalparallelMountCmdany-port3299190621/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1743537449536916520" to /tmp/TestFunctionalparallelMountCmdany-port3299190621/001/test-1743537449536916520
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.563911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0401 19:57:29.779833   16301 retry.go:31] will retry after 735.622673ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  1 19:57 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  1 19:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  1 19:57 test-1743537449536916520
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh cat /mount-9p/test-1743537449536916520
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-366801 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [014f4ed8-e77e-42ce-94fe-574b3b3f1109] Pending
helpers_test.go:344: "busybox-mount" [014f4ed8-e77e-42ce-94fe-574b3b3f1109] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [014f4ed8-e77e-42ce-94fe-574b3b3f1109] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [014f4ed8-e77e-42ce-94fe-574b3b3f1109] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.003782764s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-366801 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdany-port3299190621/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.98s)
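
For reference, the 9p mount exercised here can be driven by hand; the first findmnt probe may fail (as it does above) until the mount server is ready, which is why the test retries. A minimal sketch, with /tmp/hostdir as an illustrative host directory:

    # export a host directory into the guest at /mount-9p over 9p
    minikube -p functional-366801 mount /tmp/hostdir:/mount-9p &
    MOUNT_PID=$!
    # verify from inside the VM, then inspect the contents
    minikube -p functional-366801 ssh -- "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-366801 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"    # stop the mount server when finished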

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "381.852142ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "47.719657ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
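
The JSON output is what the timings above measure; --light skips per-cluster status probes, which is why it returns an order of magnitude faster. A short sketch (the jq filter is an illustrative assumption about the valid/invalid schema):

    # machine-readable profile listing without status probes
    minikube profile list -o json --light
    # names of healthy profiles only
    minikube profile list -o json | jq -r '.valid[].Name'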

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image load --daemon kicbase/echo-server:functional-366801 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-366801 image load --daemon kicbase/echo-server:functional-366801 --alsologtostderr: (1.736947621s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image load --daemon kicbase/echo-server:functional-366801 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-366801
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image load --daemon kicbase/echo-server:functional-366801 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image save kicbase/echo-server:functional-366801 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image rm kicbase/echo-server:functional-366801 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-366801
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 image save --daemon kicbase/echo-server:functional-366801 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-366801
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)
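
Taken together, the ImageCommands tests above round-trip an image between the host Docker daemon and the cluster's CRI-O store; note that image save --daemon hands it back under the localhost/ prefix, which is why the final check inspects localhost/kicbase/echo-server. The sequence, condensed (the tarball path is illustrative):

    # host daemon -> cluster, cluster -> tarball, tarball -> cluster, cluster -> host daemon
    minikube -p functional-366801 image load --daemon kicbase/echo-server:functional-366801
    minikube -p functional-366801 image save kicbase/echo-server:functional-366801 /tmp/echo-server.tar
    minikube -p functional-366801 image rm kicbase/echo-server:functional-366801
    minikube -p functional-366801 image load /tmp/echo-server.tar
    minikube -p functional-366801 image save --daemon kicbase/echo-server:functional-366801
    docker image inspect localhost/kicbase/echo-server:functional-366801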

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 service list -o json
functional_test.go:1511: Took "449.857717ms" to run "out/minikube-linux-amd64 -p functional-366801 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdspecific-port113837565/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.265245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0401 19:57:39.750397   16301 retry.go:31] will retry after 639.498578ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdspecific-port113837565/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh "sudo umount -f /mount-9p": exit status 1 (276.009286ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-366801 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdspecific-port113837565/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.138:30614
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.138:30614
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
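
The ServiceCmd variants above all resolve the same NodePort endpoint on the VM; a condensed sketch of discovering and hitting it (hello-node is the deployment the test created earlier):

    minikube -p functional-366801 service list
    minikube -p functional-366801 service hello-node --url                      # e.g. http://192.168.39.138:30614 above
    minikube -p functional-366801 service --namespace=default --https --url hello-node
    curl -s "$(minikube -p functional-366801 service hello-node --url)"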

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdVerifyCleanup135141630/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdVerifyCleanup135141630/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdVerifyCleanup135141630/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T" /mount1: exit status 1 (400.554053ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0401 19:57:41.979369   16301 retry.go:31] will retry after 278.046039ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366801 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-366801 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdVerifyCleanup135141630/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdVerifyCleanup135141630/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366801 /tmp/TestFunctionalparallelMountCmdVerifyCleanup135141630/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)
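
The cleanup test relies on mount --kill, which tears down every mount helper for the profile in one shot rather than unmounting each target individually. A minimal sketch, with the host directory illustrative:

    # several guest mount points can share one host directory
    minikube -p functional-366801 mount /tmp/hostdir:/mount1 &
    minikube -p functional-366801 mount /tmp/hostdir:/mount2 &
    minikube -p functional-366801 ssh -- "findmnt -T /mount1"
    # kill all mount helpers for this profile at once
    minikube mount -p functional-366801 --kill=true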

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-366801
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-366801
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-366801
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (215.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-173501 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0401 19:59:06.657586   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:34.362821   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-173501 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m34.875343485s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (215.59s)
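
The --ha flag is what turns this into a multi-control-plane cluster fronted by a virtual IP; the flags below are the ones used by the run above, followed by the same status check:

    minikube start -p ha-173501 --ha --wait=true --memory=2200 -v=7 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio
    minikube -p ha-173501 status -v=7 --alsologtostderr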

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-173501 -- rollout status deployment/busybox: (5.208655231s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-fkfwp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-q7lwv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-fkfwp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-q7lwv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-fkfwp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-q7lwv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.41s)
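
The deployment step spreads a busybox workload across the HA nodes and then checks cluster DNS from each replica; the loop below condenses the per-pod nslookup checks traced above (the manifest path is the test's own):

    minikube kubectl -p ha-173501 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube kubectl -p ha-173501 -- rollout status deployment/busybox
    # resolve an external and an in-cluster name from every replica
    for pod in $(minikube kubectl -p ha-173501 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      minikube kubectl -p ha-173501 -- exec "$pod" -- nslookup kubernetes.io
      minikube kubectl -p ha-173501 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done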

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-fkfwp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-fkfwp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-q7lwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-173501 -- exec busybox-58667487b6-q7lwv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
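
host.minikube.internal resolves to the host side of the KVM network (192.168.39.1 here); the awk/cut pipeline above just extracts that address from nslookup output before pinging it. Condensed, using a pod name from the run above:

    GW=$(minikube kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p ha-173501 -- exec busybox-58667487b6-9c5nb -- sh -c "ping -c 1 $GW"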

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-173501 -v=7 --alsologtostderr
E0401 20:02:27.799553   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:27.806101   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:27.817563   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:27.838995   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:27.880445   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:27.961933   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:28.123422   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:28.445162   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:29.086756   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:30.368268   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:32.930369   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:38.052661   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:02:48.294003   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-173501 -v=7 --alsologtostderr: (57.955015946s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.84s)
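
Growing the running HA cluster is a single node add, followed by the same status check; condensed from the run above:

    minikube node add -p ha-173501 -v=7 --alsologtostderr
    minikube -p ha-173501 status -v=7 --alsologtostderr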

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-173501 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status --output json -v=7 --alsologtostderr
E0401 20:03:08.776227   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp testdata/cp-test.txt ha-173501:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3720341960/001/cp-test_ha-173501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501:/home/docker/cp-test.txt ha-173501-m02:/home/docker/cp-test_ha-173501_ha-173501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test_ha-173501_ha-173501-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501:/home/docker/cp-test.txt ha-173501-m03:/home/docker/cp-test_ha-173501_ha-173501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test_ha-173501_ha-173501-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501:/home/docker/cp-test.txt ha-173501-m04:/home/docker/cp-test_ha-173501_ha-173501-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test_ha-173501_ha-173501-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp testdata/cp-test.txt ha-173501-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3720341960/001/cp-test_ha-173501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m02:/home/docker/cp-test.txt ha-173501:/home/docker/cp-test_ha-173501-m02_ha-173501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test_ha-173501-m02_ha-173501.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m02:/home/docker/cp-test.txt ha-173501-m03:/home/docker/cp-test_ha-173501-m02_ha-173501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test_ha-173501-m02_ha-173501-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m02:/home/docker/cp-test.txt ha-173501-m04:/home/docker/cp-test_ha-173501-m02_ha-173501-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test_ha-173501-m02_ha-173501-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp testdata/cp-test.txt ha-173501-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3720341960/001/cp-test_ha-173501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m03:/home/docker/cp-test.txt ha-173501:/home/docker/cp-test_ha-173501-m03_ha-173501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test_ha-173501-m03_ha-173501.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m03:/home/docker/cp-test.txt ha-173501-m02:/home/docker/cp-test_ha-173501-m03_ha-173501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test_ha-173501-m03_ha-173501-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m03:/home/docker/cp-test.txt ha-173501-m04:/home/docker/cp-test_ha-173501-m03_ha-173501-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test_ha-173501-m03_ha-173501-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp testdata/cp-test.txt ha-173501-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3720341960/001/cp-test_ha-173501-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m04:/home/docker/cp-test.txt ha-173501:/home/docker/cp-test_ha-173501-m04_ha-173501.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501 "sudo cat /home/docker/cp-test_ha-173501-m04_ha-173501.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m04:/home/docker/cp-test.txt ha-173501-m02:/home/docker/cp-test_ha-173501-m04_ha-173501-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test_ha-173501-m04_ha-173501-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 cp ha-173501-m04:/home/docker/cp-test.txt ha-173501-m03:/home/docker/cp-test_ha-173501-m04_ha-173501-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test_ha-173501-m04_ha-173501-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.26s)
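
minikube cp accepts node-qualified paths on either side, and ssh -n selects which node runs the verification; a condensed sketch of the host-to-node and node-to-node copies exercised above (the destination filename is illustrative):

    # host -> specific node
    minikube -p ha-173501 cp testdata/cp-test.txt ha-173501-m02:/home/docker/cp-test.txt
    minikube -p ha-173501 ssh -n ha-173501-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node
    minikube -p ha-173501 cp ha-173501-m02:/home/docker/cp-test.txt ha-173501-m03:/home/docker/cp-test_copy.txt
    minikube -p ha-173501 ssh -n ha-173501-m03 "sudo cat /home/docker/cp-test_copy.txt"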

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 node stop m02 -v=7 --alsologtostderr
E0401 20:03:49.737549   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:04:06.657477   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-173501 node stop m02 -v=7 --alsologtostderr: (1m30.981102746s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr: exit status 7 (672.72627ms)

                                                
                                                
-- stdout --
	ha-173501
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-173501-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-173501-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-173501-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:04:52.900970   30403 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:04:52.901216   30403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:04:52.901226   30403 out.go:358] Setting ErrFile to fd 2...
	I0401 20:04:52.901230   30403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:04:52.901435   30403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:04:52.901637   30403 out.go:352] Setting JSON to false
	I0401 20:04:52.901668   30403 mustload.go:65] Loading cluster: ha-173501
	I0401 20:04:52.901768   30403 notify.go:220] Checking for updates...
	I0401 20:04:52.902242   30403 config.go:182] Loaded profile config "ha-173501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:04:52.902271   30403 status.go:174] checking status of ha-173501 ...
	I0401 20:04:52.902768   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:52.902814   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:52.919219   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39183
	I0401 20:04:52.919685   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:52.920263   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:52.920286   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:52.920719   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:52.920990   30403 main.go:141] libmachine: (ha-173501) Calling .GetState
	I0401 20:04:52.923182   30403 status.go:371] ha-173501 host status = "Running" (err=<nil>)
	I0401 20:04:52.923201   30403 host.go:66] Checking if "ha-173501" exists ...
	I0401 20:04:52.923643   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:52.923708   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:52.940944   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0401 20:04:52.941408   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:52.941850   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:52.941877   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:52.942402   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:52.942615   30403 main.go:141] libmachine: (ha-173501) Calling .GetIP
	I0401 20:04:52.945660   30403 main.go:141] libmachine: (ha-173501) DBG | domain ha-173501 has defined MAC address 52:54:00:84:d7:17 in network mk-ha-173501
	I0401 20:04:52.946171   30403 main.go:141] libmachine: (ha-173501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:d7:17", ip: ""} in network mk-ha-173501: {Iface:virbr1 ExpiryTime:2025-04-01 20:58:40 +0000 UTC Type:0 Mac:52:54:00:84:d7:17 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ha-173501 Clientid:01:52:54:00:84:d7:17}
	I0401 20:04:52.946206   30403 main.go:141] libmachine: (ha-173501) DBG | domain ha-173501 has defined IP address 192.168.39.8 and MAC address 52:54:00:84:d7:17 in network mk-ha-173501
	I0401 20:04:52.946409   30403 host.go:66] Checking if "ha-173501" exists ...
	I0401 20:04:52.946708   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:52.946763   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:52.961905   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33849
	I0401 20:04:52.962388   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:52.962844   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:52.962869   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:52.963235   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:52.963419   30403 main.go:141] libmachine: (ha-173501) Calling .DriverName
	I0401 20:04:52.963656   30403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:04:52.963696   30403 main.go:141] libmachine: (ha-173501) Calling .GetSSHHostname
	I0401 20:04:52.966742   30403 main.go:141] libmachine: (ha-173501) DBG | domain ha-173501 has defined MAC address 52:54:00:84:d7:17 in network mk-ha-173501
	I0401 20:04:52.967171   30403 main.go:141] libmachine: (ha-173501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:d7:17", ip: ""} in network mk-ha-173501: {Iface:virbr1 ExpiryTime:2025-04-01 20:58:40 +0000 UTC Type:0 Mac:52:54:00:84:d7:17 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ha-173501 Clientid:01:52:54:00:84:d7:17}
	I0401 20:04:52.967208   30403 main.go:141] libmachine: (ha-173501) DBG | domain ha-173501 has defined IP address 192.168.39.8 and MAC address 52:54:00:84:d7:17 in network mk-ha-173501
	I0401 20:04:52.967325   30403 main.go:141] libmachine: (ha-173501) Calling .GetSSHPort
	I0401 20:04:52.967471   30403 main.go:141] libmachine: (ha-173501) Calling .GetSSHKeyPath
	I0401 20:04:52.967617   30403 main.go:141] libmachine: (ha-173501) Calling .GetSSHUsername
	I0401 20:04:52.967774   30403 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/ha-173501/id_rsa Username:docker}
	I0401 20:04:53.063063   30403 ssh_runner.go:195] Run: systemctl --version
	I0401 20:04:53.071242   30403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:04:53.091290   30403 kubeconfig.go:125] found "ha-173501" server: "https://192.168.39.254:8443"
	I0401 20:04:53.091325   30403 api_server.go:166] Checking apiserver status ...
	I0401 20:04:53.091365   30403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:04:53.107772   30403 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0401 20:04:53.119164   30403 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:04:53.119217   30403 ssh_runner.go:195] Run: ls
	I0401 20:04:53.124278   30403 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 20:04:53.128996   30403 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 20:04:53.129021   30403 status.go:463] ha-173501 apiserver status = Running (err=<nil>)
	I0401 20:04:53.129031   30403 status.go:176] ha-173501 status: &{Name:ha-173501 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:04:53.129046   30403 status.go:174] checking status of ha-173501-m02 ...
	I0401 20:04:53.129386   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.129419   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.144793   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0401 20:04:53.145273   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.145821   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.145851   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.146300   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.146538   30403 main.go:141] libmachine: (ha-173501-m02) Calling .GetState
	I0401 20:04:53.148219   30403 status.go:371] ha-173501-m02 host status = "Stopped" (err=<nil>)
	I0401 20:04:53.148234   30403 status.go:384] host is not running, skipping remaining checks
	I0401 20:04:53.148242   30403 status.go:176] ha-173501-m02 status: &{Name:ha-173501-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:04:53.148261   30403 status.go:174] checking status of ha-173501-m03 ...
	I0401 20:04:53.148588   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.148625   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.163765   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0401 20:04:53.164359   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.164869   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.164893   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.165211   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.165388   30403 main.go:141] libmachine: (ha-173501-m03) Calling .GetState
	I0401 20:04:53.167090   30403 status.go:371] ha-173501-m03 host status = "Running" (err=<nil>)
	I0401 20:04:53.167105   30403 host.go:66] Checking if "ha-173501-m03" exists ...
	I0401 20:04:53.167427   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.167465   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.183936   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0401 20:04:53.184389   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.184988   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.185015   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.185368   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.185570   30403 main.go:141] libmachine: (ha-173501-m03) Calling .GetIP
	I0401 20:04:53.188332   30403 main.go:141] libmachine: (ha-173501-m03) DBG | domain ha-173501-m03 has defined MAC address 52:54:00:31:ca:af in network mk-ha-173501
	I0401 20:04:53.188867   30403 main.go:141] libmachine: (ha-173501-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ca:af", ip: ""} in network mk-ha-173501: {Iface:virbr1 ExpiryTime:2025-04-01 21:00:52 +0000 UTC Type:0 Mac:52:54:00:31:ca:af Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-173501-m03 Clientid:01:52:54:00:31:ca:af}
	I0401 20:04:53.188898   30403 main.go:141] libmachine: (ha-173501-m03) DBG | domain ha-173501-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:31:ca:af in network mk-ha-173501
	I0401 20:04:53.189077   30403 host.go:66] Checking if "ha-173501-m03" exists ...
	I0401 20:04:53.189379   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.189419   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.204106   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0401 20:04:53.204571   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.205020   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.205041   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.205324   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.205513   30403 main.go:141] libmachine: (ha-173501-m03) Calling .DriverName
	I0401 20:04:53.205751   30403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:04:53.205782   30403 main.go:141] libmachine: (ha-173501-m03) Calling .GetSSHHostname
	I0401 20:04:53.208609   30403 main.go:141] libmachine: (ha-173501-m03) DBG | domain ha-173501-m03 has defined MAC address 52:54:00:31:ca:af in network mk-ha-173501
	I0401 20:04:53.209031   30403 main.go:141] libmachine: (ha-173501-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ca:af", ip: ""} in network mk-ha-173501: {Iface:virbr1 ExpiryTime:2025-04-01 21:00:52 +0000 UTC Type:0 Mac:52:54:00:31:ca:af Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-173501-m03 Clientid:01:52:54:00:31:ca:af}
	I0401 20:04:53.209056   30403 main.go:141] libmachine: (ha-173501-m03) DBG | domain ha-173501-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:31:ca:af in network mk-ha-173501
	I0401 20:04:53.209178   30403 main.go:141] libmachine: (ha-173501-m03) Calling .GetSSHPort
	I0401 20:04:53.209376   30403 main.go:141] libmachine: (ha-173501-m03) Calling .GetSSHKeyPath
	I0401 20:04:53.209522   30403 main.go:141] libmachine: (ha-173501-m03) Calling .GetSSHUsername
	I0401 20:04:53.209639   30403 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/ha-173501-m03/id_rsa Username:docker}
	I0401 20:04:53.295862   30403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:04:53.314485   30403 kubeconfig.go:125] found "ha-173501" server: "https://192.168.39.254:8443"
	I0401 20:04:53.314523   30403 api_server.go:166] Checking apiserver status ...
	I0401 20:04:53.314566   30403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:04:53.330829   30403 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	W0401 20:04:53.341277   30403 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:04:53.341334   30403 ssh_runner.go:195] Run: ls
	I0401 20:04:53.345931   30403 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 20:04:53.350312   30403 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 20:04:53.350334   30403 status.go:463] ha-173501-m03 apiserver status = Running (err=<nil>)
	I0401 20:04:53.350342   30403 status.go:176] ha-173501-m03 status: &{Name:ha-173501-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:04:53.350355   30403 status.go:174] checking status of ha-173501-m04 ...
	I0401 20:04:53.350644   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.350677   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.368174   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I0401 20:04:53.368585   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.368987   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.369011   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.369369   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.369578   30403 main.go:141] libmachine: (ha-173501-m04) Calling .GetState
	I0401 20:04:53.371206   30403 status.go:371] ha-173501-m04 host status = "Running" (err=<nil>)
	I0401 20:04:53.371221   30403 host.go:66] Checking if "ha-173501-m04" exists ...
	I0401 20:04:53.371489   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.371525   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.386358   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0401 20:04:53.386828   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.387455   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.387470   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.387785   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.387985   30403 main.go:141] libmachine: (ha-173501-m04) Calling .GetIP
	I0401 20:04:53.391184   30403 main.go:141] libmachine: (ha-173501-m04) DBG | domain ha-173501-m04 has defined MAC address 52:54:00:6c:a1:68 in network mk-ha-173501
	I0401 20:04:53.391656   30403 main.go:141] libmachine: (ha-173501-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:a1:68", ip: ""} in network mk-ha-173501: {Iface:virbr1 ExpiryTime:2025-04-01 21:02:25 +0000 UTC Type:0 Mac:52:54:00:6c:a1:68 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-173501-m04 Clientid:01:52:54:00:6c:a1:68}
	I0401 20:04:53.391689   30403 main.go:141] libmachine: (ha-173501-m04) DBG | domain ha-173501-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:6c:a1:68 in network mk-ha-173501
	I0401 20:04:53.391840   30403 host.go:66] Checking if "ha-173501-m04" exists ...
	I0401 20:04:53.392346   30403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:04:53.392403   30403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:04:53.408232   30403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0401 20:04:53.408624   30403 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:04:53.409077   30403 main.go:141] libmachine: Using API Version  1
	I0401 20:04:53.409103   30403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:04:53.409404   30403 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:04:53.409613   30403 main.go:141] libmachine: (ha-173501-m04) Calling .DriverName
	I0401 20:04:53.409830   30403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:04:53.409852   30403 main.go:141] libmachine: (ha-173501-m04) Calling .GetSSHHostname
	I0401 20:04:53.412628   30403 main.go:141] libmachine: (ha-173501-m04) DBG | domain ha-173501-m04 has defined MAC address 52:54:00:6c:a1:68 in network mk-ha-173501
	I0401 20:04:53.413128   30403 main.go:141] libmachine: (ha-173501-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:a1:68", ip: ""} in network mk-ha-173501: {Iface:virbr1 ExpiryTime:2025-04-01 21:02:25 +0000 UTC Type:0 Mac:52:54:00:6c:a1:68 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-173501-m04 Clientid:01:52:54:00:6c:a1:68}
	I0401 20:04:53.413158   30403 main.go:141] libmachine: (ha-173501-m04) DBG | domain ha-173501-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:6c:a1:68 in network mk-ha-173501
	I0401 20:04:53.413309   30403 main.go:141] libmachine: (ha-173501-m04) Calling .GetSSHPort
	I0401 20:04:53.413491   30403 main.go:141] libmachine: (ha-173501-m04) Calling .GetSSHKeyPath
	I0401 20:04:53.413645   30403 main.go:141] libmachine: (ha-173501-m04) Calling .GetSSHUsername
	I0401 20:04:53.413770   30403 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/ha-173501-m04/id_rsa Username:docker}
	I0401 20:04:53.506928   30403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:04:53.525689   30403 status.go:176] ha-173501-m04 status: &{Name:ha-173501-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (57.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 node start m02 -v=7 --alsologtostderr
E0401 20:05:11.659243   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-173501 node start m02 -v=7 --alsologtostderr: (56.28971021s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (57.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (436.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-173501 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-173501 -v=7 --alsologtostderr
E0401 20:07:27.799369   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:07:55.500810   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:09:06.658266   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-173501 -v=7 --alsologtostderr: (4m34.119218194s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-173501 --wait=true -v=7 --alsologtostderr
E0401 20:10:29.724583   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:12:27.799682   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-173501 --wait=true -v=7 --alsologtostderr: (2m42.701758661s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-173501
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (436.94s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-173501 node delete m03 -v=7 --alsologtostderr: (17.993379947s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.74s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 stop -v=7 --alsologtostderr
E0401 20:14:06.657860   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:17:27.799485   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-173501 stop -v=7 --alsologtostderr: (4m32.812323247s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr: exit status 7 (102.515178ms)

                                                
                                                
-- stdout --
	ha-173501
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-173501-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-173501-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:18:01.527563   35160 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:18:01.527810   35160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:18:01.527818   35160 out.go:358] Setting ErrFile to fd 2...
	I0401 20:18:01.527823   35160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:18:01.527995   35160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:18:01.528146   35160 out.go:352] Setting JSON to false
	I0401 20:18:01.528171   35160 mustload.go:65] Loading cluster: ha-173501
	I0401 20:18:01.528264   35160 notify.go:220] Checking for updates...
	I0401 20:18:01.528603   35160 config.go:182] Loaded profile config "ha-173501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:18:01.528633   35160 status.go:174] checking status of ha-173501 ...
	I0401 20:18:01.529147   35160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:18:01.529202   35160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:18:01.544500   35160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39645
	I0401 20:18:01.544955   35160 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:18:01.545477   35160 main.go:141] libmachine: Using API Version  1
	I0401 20:18:01.545503   35160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:18:01.546014   35160 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:18:01.546287   35160 main.go:141] libmachine: (ha-173501) Calling .GetState
	I0401 20:18:01.548273   35160 status.go:371] ha-173501 host status = "Stopped" (err=<nil>)
	I0401 20:18:01.548293   35160 status.go:384] host is not running, skipping remaining checks
	I0401 20:18:01.548300   35160 status.go:176] ha-173501 status: &{Name:ha-173501 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:18:01.548339   35160 status.go:174] checking status of ha-173501-m02 ...
	I0401 20:18:01.548697   35160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:18:01.548736   35160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:18:01.563774   35160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0401 20:18:01.564242   35160 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:18:01.564693   35160 main.go:141] libmachine: Using API Version  1
	I0401 20:18:01.564715   35160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:18:01.565092   35160 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:18:01.565255   35160 main.go:141] libmachine: (ha-173501-m02) Calling .GetState
	I0401 20:18:01.566677   35160 status.go:371] ha-173501-m02 host status = "Stopped" (err=<nil>)
	I0401 20:18:01.566689   35160 status.go:384] host is not running, skipping remaining checks
	I0401 20:18:01.566695   35160 status.go:176] ha-173501-m02 status: &{Name:ha-173501-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:18:01.566720   35160 status.go:174] checking status of ha-173501-m04 ...
	I0401 20:18:01.567008   35160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:18:01.567039   35160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:18:01.582664   35160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0401 20:18:01.583156   35160 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:18:01.583562   35160 main.go:141] libmachine: Using API Version  1
	I0401 20:18:01.583580   35160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:18:01.583903   35160 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:18:01.584071   35160 main.go:141] libmachine: (ha-173501-m04) Calling .GetState
	I0401 20:18:01.585682   35160 status.go:371] ha-173501-m04 host status = "Stopped" (err=<nil>)
	I0401 20:18:01.585694   35160 status.go:384] host is not running, skipping remaining checks
	I0401 20:18:01.585699   35160 status.go:176] ha-173501-m04 status: &{Name:ha-173501-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.92s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (123.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-173501 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0401 20:18:50.862630   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:19:06.658077   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-173501 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m2.449791731s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (123.19s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-173501 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-173501 --control-plane -v=7 --alsologtostderr: (1m18.753994638s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-173501 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (51.65s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-305920 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-305920 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (51.646080088s)
--- PASS: TestJSONOutput/start/Command (51.65s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-305920 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-305920 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.39s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-305920 --output=json --user=testUser
E0401 20:22:27.799486   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-305920 --output=json --user=testUser: (7.394734705s)
--- PASS: TestJSONOutput/stop/Command (7.39s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-050958 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-050958 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.83492ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4e1d7a9-9582-41ea-ad02-fa5b73a3a4d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-050958] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73068eee-8efc-49ea-9d1f-5c4826e5f24d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20506"}}
	{"specversion":"1.0","id":"ce77e2fe-c1d9-4684-8ae0-336654a805b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5995cc94-eb2c-4da5-9d90-ae050a6a8023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig"}}
	{"specversion":"1.0","id":"eac8afa0-a5b6-46dc-b399-76be3681511e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube"}}
	{"specversion":"1.0","id":"3ca474c7-898d-4e8e-8798-667a0ce54809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6a503191-aac4-4400-9e87-081155fc9067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a86f48b7-3c97-4bbd-9d42-f392d049a984","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-050958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-050958
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.69s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-274173 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-274173 --driver=kvm2  --container-runtime=crio: (42.849960958s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-285040 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-285040 --driver=kvm2  --container-runtime=crio: (42.995257629s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-274173
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-285040
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-285040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-285040
helpers_test.go:175: Cleaning up "first-274173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-274173
--- PASS: TestMinikubeProfile (88.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-663711 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0401 20:24:06.662826   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-663711 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.657057191s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.66s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-663711 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-663711 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-684270 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-684270 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.79713255s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.80s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684270 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684270 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-663711 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684270 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684270 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-684270
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-684270: (1.287534662s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-684270
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-684270: (22.571767386s)
--- PASS: TestMountStart/serial/RestartStopped (23.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684270 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-684270 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (117.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546775 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0401 20:27:09.726621   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546775 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.267833373s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.68s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-546775 -- rollout status deployment/busybox: (4.435116867s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-pk2qz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-wpnzr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-pk2qz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-wpnzr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-pk2qz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-wpnzr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-pk2qz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-pk2qz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-wpnzr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0401 20:27:27.799031   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-546775 -- exec busybox-58667487b6-wpnzr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (51.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-546775 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-546775 -v 3 --alsologtostderr: (51.299231116s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-546775 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp testdata/cp-test.txt multinode-546775:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2325078635/001/cp-test_multinode-546775.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775:/home/docker/cp-test.txt multinode-546775-m02:/home/docker/cp-test_multinode-546775_multinode-546775-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m02 "sudo cat /home/docker/cp-test_multinode-546775_multinode-546775-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775:/home/docker/cp-test.txt multinode-546775-m03:/home/docker/cp-test_multinode-546775_multinode-546775-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m03 "sudo cat /home/docker/cp-test_multinode-546775_multinode-546775-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp testdata/cp-test.txt multinode-546775-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2325078635/001/cp-test_multinode-546775-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775-m02:/home/docker/cp-test.txt multinode-546775:/home/docker/cp-test_multinode-546775-m02_multinode-546775.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775 "sudo cat /home/docker/cp-test_multinode-546775-m02_multinode-546775.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775-m02:/home/docker/cp-test.txt multinode-546775-m03:/home/docker/cp-test_multinode-546775-m02_multinode-546775-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m03 "sudo cat /home/docker/cp-test_multinode-546775-m02_multinode-546775-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp testdata/cp-test.txt multinode-546775-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2325078635/001/cp-test_multinode-546775-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775-m03:/home/docker/cp-test.txt multinode-546775:/home/docker/cp-test_multinode-546775-m03_multinode-546775.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775 "sudo cat /home/docker/cp-test_multinode-546775-m03_multinode-546775.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 cp multinode-546775-m03:/home/docker/cp-test.txt multinode-546775-m02:/home/docker/cp-test_multinode-546775-m03_multinode-546775-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 ssh -n multinode-546775-m02 "sudo cat /home/docker/cp-test_multinode-546775-m03_multinode-546775-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.38s)

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-546775 node stop m03: (1.507212077s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546775 status: exit status 7 (431.096717ms)

                                                
                                                
-- stdout --
	multinode-546775
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-546775-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-546775-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr: exit status 7 (444.908372ms)

                                                
                                                
-- stdout --
	multinode-546775
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-546775-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-546775-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:28:30.028703   42877 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:28:30.028979   42877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:28:30.028990   42877 out.go:358] Setting ErrFile to fd 2...
	I0401 20:28:30.028997   42877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:28:30.029221   42877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:28:30.029397   42877 out.go:352] Setting JSON to false
	I0401 20:28:30.029438   42877 mustload.go:65] Loading cluster: multinode-546775
	I0401 20:28:30.029539   42877 notify.go:220] Checking for updates...
	I0401 20:28:30.029872   42877 config.go:182] Loaded profile config "multinode-546775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:28:30.029898   42877 status.go:174] checking status of multinode-546775 ...
	I0401 20:28:30.030353   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.030416   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.049630   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39329
	I0401 20:28:30.050074   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.050707   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.050737   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.051166   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.051407   42877 main.go:141] libmachine: (multinode-546775) Calling .GetState
	I0401 20:28:30.053151   42877 status.go:371] multinode-546775 host status = "Running" (err=<nil>)
	I0401 20:28:30.053168   42877 host.go:66] Checking if "multinode-546775" exists ...
	I0401 20:28:30.053601   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.053654   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.069960   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
	I0401 20:28:30.070553   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.071140   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.071178   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.071548   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.071758   42877 main.go:141] libmachine: (multinode-546775) Calling .GetIP
	I0401 20:28:30.075189   42877 main.go:141] libmachine: (multinode-546775) DBG | domain multinode-546775 has defined MAC address 52:54:00:f5:70:d7 in network mk-multinode-546775
	I0401 20:28:30.075719   42877 main.go:141] libmachine: (multinode-546775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:70:d7", ip: ""} in network mk-multinode-546775: {Iface:virbr1 ExpiryTime:2025-04-01 21:25:39 +0000 UTC Type:0 Mac:52:54:00:f5:70:d7 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-546775 Clientid:01:52:54:00:f5:70:d7}
	I0401 20:28:30.075750   42877 main.go:141] libmachine: (multinode-546775) DBG | domain multinode-546775 has defined IP address 192.168.39.64 and MAC address 52:54:00:f5:70:d7 in network mk-multinode-546775
	I0401 20:28:30.075947   42877 host.go:66] Checking if "multinode-546775" exists ...
	I0401 20:28:30.076276   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.076330   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.093460   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I0401 20:28:30.094017   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.094537   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.094561   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.094914   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.095075   42877 main.go:141] libmachine: (multinode-546775) Calling .DriverName
	I0401 20:28:30.095223   42877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:28:30.095255   42877 main.go:141] libmachine: (multinode-546775) Calling .GetSSHHostname
	I0401 20:28:30.097985   42877 main.go:141] libmachine: (multinode-546775) DBG | domain multinode-546775 has defined MAC address 52:54:00:f5:70:d7 in network mk-multinode-546775
	I0401 20:28:30.098566   42877 main.go:141] libmachine: (multinode-546775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:70:d7", ip: ""} in network mk-multinode-546775: {Iface:virbr1 ExpiryTime:2025-04-01 21:25:39 +0000 UTC Type:0 Mac:52:54:00:f5:70:d7 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-546775 Clientid:01:52:54:00:f5:70:d7}
	I0401 20:28:30.098588   42877 main.go:141] libmachine: (multinode-546775) DBG | domain multinode-546775 has defined IP address 192.168.39.64 and MAC address 52:54:00:f5:70:d7 in network mk-multinode-546775
	I0401 20:28:30.099196   42877 main.go:141] libmachine: (multinode-546775) Calling .GetSSHPort
	I0401 20:28:30.099445   42877 main.go:141] libmachine: (multinode-546775) Calling .GetSSHKeyPath
	I0401 20:28:30.099687   42877 main.go:141] libmachine: (multinode-546775) Calling .GetSSHUsername
	I0401 20:28:30.099864   42877 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/multinode-546775/id_rsa Username:docker}
	I0401 20:28:30.182258   42877 ssh_runner.go:195] Run: systemctl --version
	I0401 20:28:30.189052   42877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:28:30.205567   42877 kubeconfig.go:125] found "multinode-546775" server: "https://192.168.39.64:8443"
	I0401 20:28:30.205604   42877 api_server.go:166] Checking apiserver status ...
	I0401 20:28:30.205633   42877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:28:30.219851   42877 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup
	W0401 20:28:30.230676   42877 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:28:30.230739   42877 ssh_runner.go:195] Run: ls
	I0401 20:28:30.235655   42877 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I0401 20:28:30.240392   42877 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I0401 20:28:30.240430   42877 status.go:463] multinode-546775 apiserver status = Running (err=<nil>)
	I0401 20:28:30.240442   42877 status.go:176] multinode-546775 status: &{Name:multinode-546775 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:28:30.240461   42877 status.go:174] checking status of multinode-546775-m02 ...
	I0401 20:28:30.240797   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.240856   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.256895   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0401 20:28:30.257366   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.257840   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.257861   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.258182   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.258369   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .GetState
	I0401 20:28:30.259974   42877 status.go:371] multinode-546775-m02 host status = "Running" (err=<nil>)
	I0401 20:28:30.259990   42877 host.go:66] Checking if "multinode-546775-m02" exists ...
	I0401 20:28:30.260291   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.260336   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.277206   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0401 20:28:30.277682   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.278185   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.278228   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.278594   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.278780   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .GetIP
	I0401 20:28:30.282101   42877 main.go:141] libmachine: (multinode-546775-m02) DBG | domain multinode-546775-m02 has defined MAC address 52:54:00:91:c5:fa in network mk-multinode-546775
	I0401 20:28:30.282841   42877 main.go:141] libmachine: (multinode-546775-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:c5:fa", ip: ""} in network mk-multinode-546775: {Iface:virbr1 ExpiryTime:2025-04-01 21:26:43 +0000 UTC Type:0 Mac:52:54:00:91:c5:fa Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-546775-m02 Clientid:01:52:54:00:91:c5:fa}
	I0401 20:28:30.282868   42877 main.go:141] libmachine: (multinode-546775-m02) DBG | domain multinode-546775-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:91:c5:fa in network mk-multinode-546775
	I0401 20:28:30.283103   42877 host.go:66] Checking if "multinode-546775-m02" exists ...
	I0401 20:28:30.283410   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.283446   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.299057   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0401 20:28:30.299507   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.299898   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.299915   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.300223   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.300378   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .DriverName
	I0401 20:28:30.300534   42877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:28:30.300558   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .GetSSHHostname
	I0401 20:28:30.303204   42877 main.go:141] libmachine: (multinode-546775-m02) DBG | domain multinode-546775-m02 has defined MAC address 52:54:00:91:c5:fa in network mk-multinode-546775
	I0401 20:28:30.303679   42877 main.go:141] libmachine: (multinode-546775-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:c5:fa", ip: ""} in network mk-multinode-546775: {Iface:virbr1 ExpiryTime:2025-04-01 21:26:43 +0000 UTC Type:0 Mac:52:54:00:91:c5:fa Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-546775-m02 Clientid:01:52:54:00:91:c5:fa}
	I0401 20:28:30.303705   42877 main.go:141] libmachine: (multinode-546775-m02) DBG | domain multinode-546775-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:91:c5:fa in network mk-multinode-546775
	I0401 20:28:30.303877   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .GetSSHPort
	I0401 20:28:30.304039   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .GetSSHKeyPath
	I0401 20:28:30.304243   42877 main.go:141] libmachine: (multinode-546775-m02) Calling .GetSSHUsername
	I0401 20:28:30.304386   42877 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20506-9129/.minikube/machines/multinode-546775-m02/id_rsa Username:docker}
	I0401 20:28:30.386537   42877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:28:30.401242   42877 status.go:176] multinode-546775-m02 status: &{Name:multinode-546775-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:28:30.401272   42877 status.go:174] checking status of multinode-546775-m03 ...
	I0401 20:28:30.401603   42877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:28:30.401657   42877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:28:30.417274   42877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0401 20:28:30.417666   42877 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:28:30.418116   42877 main.go:141] libmachine: Using API Version  1
	I0401 20:28:30.418135   42877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:28:30.418564   42877 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:28:30.418766   42877 main.go:141] libmachine: (multinode-546775-m03) Calling .GetState
	I0401 20:28:30.420431   42877 status.go:371] multinode-546775-m03 host status = "Stopped" (err=<nil>)
	I0401 20:28:30.420448   42877 status.go:384] host is not running, skipping remaining checks
	I0401 20:28:30.420455   42877 status.go:176] multinode-546775-m03 status: &{Name:multinode-546775-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
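The stop/status sequence exercised above can be replayed by hand against any multi-node profile; this is only a sketch, reusing the profile and node names from this run as examples:

	# stop only the m03 worker, leaving the control plane and m02 running
	out/minikube-linux-amd64 -p multinode-546775 node stop m03
	# status now exits non-zero (exit code 7) because one node reports Stopped
	out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr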

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 node start m03 -v=7 --alsologtostderr
E0401 20:29:06.658543   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-546775 node start m03 -v=7 --alsologtostderr: (40.764042113s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.40s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (349.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-546775
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-546775
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-546775: (3m3.148609957s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546775 --wait=true -v=8 --alsologtostderr
E0401 20:32:27.799612   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:06.657482   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546775 --wait=true -v=8 --alsologtostderr: (2m46.738190658s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-546775
--- PASS: TestMultiNode/serial/RestartKeepsNodes (349.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-546775 node delete m03: (2.209116278s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 stop
E0401 20:35:30.866350   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:37:27.799701   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-546775 stop: (3m1.697199671s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546775 status: exit status 7 (82.840092ms)

                                                
                                                
-- stdout --
	multinode-546775
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-546775-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr: exit status 7 (79.96001ms)

                                                
                                                
-- stdout --
	multinode-546775
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-546775-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:38:06.367196   45958 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:06.367470   45958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:06.367481   45958 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:06.367485   45958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:06.367678   45958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:38:06.367845   45958 out.go:352] Setting JSON to false
	I0401 20:38:06.367873   45958 mustload.go:65] Loading cluster: multinode-546775
	I0401 20:38:06.368003   45958 notify.go:220] Checking for updates...
	I0401 20:38:06.368353   45958 config.go:182] Loaded profile config "multinode-546775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:06.368381   45958 status.go:174] checking status of multinode-546775 ...
	I0401 20:38:06.369356   45958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:38:06.369443   45958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:38:06.385076   45958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44257
	I0401 20:38:06.385476   45958 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:38:06.386050   45958 main.go:141] libmachine: Using API Version  1
	I0401 20:38:06.386073   45958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:38:06.386503   45958 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:38:06.386731   45958 main.go:141] libmachine: (multinode-546775) Calling .GetState
	I0401 20:38:06.388231   45958 status.go:371] multinode-546775 host status = "Stopped" (err=<nil>)
	I0401 20:38:06.388244   45958 status.go:384] host is not running, skipping remaining checks
	I0401 20:38:06.388251   45958 status.go:176] multinode-546775 status: &{Name:multinode-546775 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:38:06.388294   45958 status.go:174] checking status of multinode-546775-m02 ...
	I0401 20:38:06.388600   45958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 20:38:06.388644   45958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 20:38:06.403068   45958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I0401 20:38:06.403407   45958 main.go:141] libmachine: () Calling .GetVersion
	I0401 20:38:06.403781   45958 main.go:141] libmachine: Using API Version  1
	I0401 20:38:06.403800   45958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 20:38:06.404144   45958 main.go:141] libmachine: () Calling .GetMachineName
	I0401 20:38:06.404322   45958 main.go:141] libmachine: (multinode-546775-m02) Calling .GetState
	I0401 20:38:06.405697   45958 status.go:371] multinode-546775-m02 host status = "Stopped" (err=<nil>)
	I0401 20:38:06.405707   45958 status.go:384] host is not running, skipping remaining checks
	I0401 20:38:06.405712   45958 status.go:176] multinode-546775-m02 status: &{Name:multinode-546775-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.86s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (195.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546775 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0401 20:39:06.657534   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546775 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.231203452s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-546775 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (195.74s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-546775
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546775-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-546775-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.315505ms)

                                                
                                                
-- stdout --
	* [multinode-546775-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-546775-m02' is duplicated with machine name 'multinode-546775-m02' in profile 'multinode-546775'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-546775-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-546775-m03 --driver=kvm2  --container-runtime=crio: (43.823797958s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-546775
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-546775: exit status 80 (212.307637ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-546775 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-546775-m03 already exists in multinode-546775-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-546775-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.89s)
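The conflict checks above come down to two rules that can be confirmed manually: a new profile may not reuse a machine name that already belongs to an existing profile, and node add refuses to create a node whose name collides with another profile. A minimal sketch, reusing the profile names from this run purely as an example:

	# rejected with MK_USAGE (exit 14): machine name already used inside profile multinode-546775
	out/minikube-linux-amd64 start -p multinode-546775-m02 --driver=kvm2 --container-runtime=crio
	# rejected with GUEST_NODE_ADD (exit 80) while a standalone multinode-546775-m03 profile exists
	out/minikube-linux-amd64 node add -p multinode-546775
	# remove the conflicting standalone profile to clear the collision
	out/minikube-linux-amd64 delete -p multinode-546775-m03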

                                                
                                    
TestScheduledStopUnix (115.71s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-697388 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-697388 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.076234586s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697388 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-697388 -n scheduled-stop-697388
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697388 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0401 20:46:21.902470   16301 retry.go:31] will retry after 89.3µs: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.903654   16301 retry.go:31] will retry after 84.444µs: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.904820   16301 retry.go:31] will retry after 160.056µs: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.905943   16301 retry.go:31] will retry after 452.852µs: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.907060   16301 retry.go:31] will retry after 672.07µs: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.908173   16301 retry.go:31] will retry after 1.010513ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.909297   16301 retry.go:31] will retry after 915.817µs: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.910464   16301 retry.go:31] will retry after 2.112483ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.913662   16301 retry.go:31] will retry after 3.159887ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.917902   16301 retry.go:31] will retry after 2.328803ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.921105   16301 retry.go:31] will retry after 4.290904ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.926284   16301 retry.go:31] will retry after 4.523105ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.931517   16301 retry.go:31] will retry after 10.687665ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.942775   16301 retry.go:31] will retry after 20.844087ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
I0401 20:46:21.964054   16301 retry.go:31] will retry after 30.297865ms: open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/scheduled-stop-697388/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697388 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-697388 -n scheduled-stop-697388
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-697388
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-697388 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0401 20:47:27.799800   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-697388
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-697388: exit status 7 (63.664653ms)

                                                
                                                
-- stdout --
	scheduled-stop-697388
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-697388 -n scheduled-stop-697388
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-697388 -n scheduled-stop-697388: exit status 7 (63.898764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-697388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-697388
--- PASS: TestScheduledStopUnix (115.71s)
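The behaviour covered above is driven by minikube's --schedule and --cancel-scheduled stop flags; a hand-run version of the same flow (profile name illustrative) would look roughly like:

	# schedule a stop 5 minutes out; the command returns immediately
	out/minikube-linux-amd64 stop -p scheduled-stop-697388 --schedule 5m
	# cancel the pending stop before it fires
	out/minikube-linux-amd64 stop -p scheduled-stop-697388 --cancel-scheduled
	# schedule a short 15s stop and wait; status then exits with code 7 (Stopped)
	out/minikube-linux-amd64 stop -p scheduled-stop-697388 --schedule 15s
	out/minikube-linux-amd64 status -p scheduled-stop-697388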

                                                
                                    
TestRunningBinaryUpgrade (222.69s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.590055267 start -p running-upgrade-877059 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0401 20:49:06.658580   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.590055267 start -p running-upgrade-877059 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.560902651s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-877059 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-877059 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.503104664s)
I0401 20:51:14.893581   16301 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate552366716/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0003f7a28 gz:0xc0003f7ab0 tar:0xc0003f7a60 tar.bz2:0xc0003f7a70 tar.gz:0xc0003f7a80 tar.xz:0xc0003f7a90 tar.zst:0xc0003f7aa0 tbz2:0xc0003f7a70 tgz:0xc0003f7a80 txz:0xc0003f7a90 tzst:0xc0003f7aa0 xz:0xc0003f7ab8 zip:0xc0003f7ac0 zst:0xc0003f7ad0] Getters:map[file:0xc0009d7d10 http:0xc001af0320 https:0xc001af0370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0401 20:51:14.893624   16301 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate552366716/002/docker-machine-driver-kvm2
helpers_test.go:175: Cleaning up "running-upgrade-877059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-877059
--- PASS: TestRunningBinaryUpgrade (222.69s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (83.539693ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-850365] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
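As the MK_USAGE message above indicates, --no-kubernetes and --kubernetes-version are mutually exclusive; any globally pinned version has to be unset before starting without Kubernetes. A minimal sketch of the failing and corrected invocations (profile name illustrative):

	# fails with MK_USAGE (exit 14): a version flag conflicts with --no-kubernetes
	out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# drop any globally configured version, then start without Kubernetes
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --driver=kvm2 --container-runtime=crio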

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-850365 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-850365 --driver=kvm2  --container-runtime=crio: (1m33.508711416s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-850365 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (20.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.782408744s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-850365 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-850365 status -o json: exit status 2 (256.155785ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-850365","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-850365
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-850365: (1.129751507s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.17s)

                                                
                                    
TestPause/serial/Start (54.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-854311 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-854311 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (54.615526842s)
--- PASS: TestPause/serial/Start (54.62s)

                                                
                                    
TestNoKubernetes/serial/Start (51.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-850365 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.546760707s)
--- PASS: TestNoKubernetes/serial/Start (51.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-850365 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-850365 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.632426ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.026673751s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-850365
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-850365: (1.732266684s)
--- PASS: TestNoKubernetes/serial/Stop (1.73s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (27.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-850365 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-850365 --driver=kvm2  --container-runtime=crio: (27.436165863s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-850365 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-850365 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.100022ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/false (3.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-269490 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-269490 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.461097ms)

                                                
                                                
-- stdout --
	* [false-269490] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:51:04.805660   53806 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:51:04.805758   53806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:51:04.805764   53806 out.go:358] Setting ErrFile to fd 2...
	I0401 20:51:04.805770   53806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:51:04.805984   53806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-9129/.minikube/bin
	I0401 20:51:04.806601   53806 out.go:352] Setting JSON to false
	I0401 20:51:04.807545   53806 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5609,"bootTime":1743535056,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:51:04.807602   53806 start.go:139] virtualization: kvm guest
	I0401 20:51:04.809730   53806 out.go:177] * [false-269490] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:51:04.811286   53806 notify.go:220] Checking for updates...
	I0401 20:51:04.811303   53806 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:51:04.813159   53806 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:51:04.814736   53806 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-9129/kubeconfig
	I0401 20:51:04.816196   53806 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-9129/.minikube
	I0401 20:51:04.817451   53806 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:51:04.818704   53806 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:51:04.820282   53806 config.go:182] Loaded profile config "force-systemd-env-818542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:51:04.820411   53806 config.go:182] Loaded profile config "kubernetes-upgrade-881088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:51:04.820508   53806 config.go:182] Loaded profile config "running-upgrade-877059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0401 20:51:04.820624   53806 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:51:04.858111   53806 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 20:51:04.859483   53806 start.go:297] selected driver: kvm2
	I0401 20:51:04.859498   53806 start.go:901] validating driver "kvm2" against <nil>
	I0401 20:51:04.859517   53806 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:51:04.861431   53806 out.go:201] 
	W0401 20:51:04.862806   53806 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0401 20:51:04.864118   53806 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-269490 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-269490" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.171:8443
  name: running-upgrade-877059
contexts:
- context:
    cluster: running-upgrade-877059
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-877059
  name: running-upgrade-877059
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-877059
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/running-upgrade-877059/client.crt
    client-key: /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/running-upgrade-877059/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-269490

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269490"

                                                
                                                
----------------------- debugLogs end: false-269490 [took: 2.772827136s] --------------------------------
helpers_test.go:175: Cleaning up "false-269490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-269490
--- PASS: TestNetworkPlugins/group/false (3.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (142.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3974004723 start -p stopped-upgrade-321311 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3974004723 start -p stopped-upgrade-321311 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m16.922027601s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3974004723 -p stopped-upgrade-321311 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3974004723 -p stopped-upgrade-321311 stop: (2.182809939s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-321311 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-321311 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.262321189s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.37s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-321311
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (111.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-881142 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0401 20:54:06.657504   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-881142 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m51.886819482s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (111.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-248912 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-248912 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m30.905294727s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-248912 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [445e586b-b804-4a76-ae5b-ab3cc8655bd4] Pending
helpers_test.go:344: "busybox" [445e586b-b804-4a76-ae5b-ab3cc8655bd4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [445e586b-b804-4a76-ae5b-ab3cc8655bd4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.002693838s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-248912 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-704555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-704555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (56.088482283s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-248912 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-248912 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-248912 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-248912 --alsologtostderr -v=3: (1m30.866775549s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-881142 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e23d5850-fdcb-45aa-b4a3-f2a7302040b9] Pending
helpers_test.go:344: "busybox" [e23d5850-fdcb-45aa-b4a3-f2a7302040b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e23d5850-fdcb-45aa-b4a3-f2a7302040b9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004665629s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-881142 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-881142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-881142 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-881142 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-881142 --alsologtostderr -v=3: (1m31.029521845s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-704555 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48bbcbc2-105f-49b6-9b9f-a9c3caf184ba] Pending
helpers_test.go:344: "busybox" [48bbcbc2-105f-49b6-9b9f-a9c3caf184ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48bbcbc2-105f-49b6-9b9f-a9c3caf184ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004858995s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-704555 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-704555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-704555 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-704555 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-704555 --alsologtostderr -v=3: (1m31.055121145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-248912 -n embed-certs-248912
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-248912 -n embed-certs-248912: exit status 7 (61.672007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-248912 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (334.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-248912 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0401 20:57:27.799396   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-248912 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m34.503634538s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-248912 -n embed-certs-248912
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-881142 -n no-preload-881142
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-881142 -n no-preload-881142: exit status 7 (65.890755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-881142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (361.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-881142 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-881142 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (6m0.939371032s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-881142 -n no-preload-881142
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (361.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555: exit status 7 (68.823938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-704555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-704555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-704555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m39.496501527s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-582207 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-582207 --alsologtostderr -v=3: (2.304993842s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-582207 -n old-k8s-version-582207: exit status 7 (67.329629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-582207 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dgb58" [438440c3-ce91-4a3e-9a50-a9ab21695660] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003929059s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-dgb58" [438440c3-ce91-4a3e-9a50-a9ab21695660] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005101805s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-248912 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-248912 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-248912 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-248912 -n embed-certs-248912
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-248912 -n embed-certs-248912: exit status 2 (255.867407ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-248912 -n embed-certs-248912
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-248912 -n embed-certs-248912: exit status 2 (261.899831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-248912 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-248912 -n embed-certs-248912
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-248912 -n embed-certs-248912
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-546869 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-546869 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (48.852694246s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6g82t" [f5805460-20d5-493f-bd4d-96c0a802df5f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004029273s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-6g82t" [f5805460-20d5-493f-bd4d-96c0a802df5f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006504304s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-881142 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-881142 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-881142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-881142 -n no-preload-881142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-881142 -n no-preload-881142: exit status 2 (270.462618ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-881142 -n no-preload-881142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-881142 -n no-preload-881142: exit status 2 (272.067704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-881142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-881142 --alsologtostderr -v=1: (1.017921589s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-881142 -n no-preload-881142
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-881142 -n no-preload-881142
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (87.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m27.978771752s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cqtrz" [6da95ab7-b654-47dc-8e32-61174b1ed319] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cqtrz" [6da95ab7-b654-47dc-8e32-61174b1ed319] Running
E0401 21:04:06.658512   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/addons-357468/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004864659s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-546869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-546869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.662136116s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-546869 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-546869 --alsologtostderr -v=3: (11.383861101s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-cqtrz" [6da95ab7-b654-47dc-8e32-61174b1ed319] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005155562s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-704555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-704555 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-704555 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555: exit status 2 (245.576227ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555: exit status 2 (246.690254ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-704555 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-704555 -n default-k8s-diff-port-704555
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-546869 -n newest-cni-546869
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-546869 -n newest-cni-546869: exit status 7 (63.892256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-546869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (45.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-546869 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-546869 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (45.635660667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-546869 -n newest-cni-546869
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (95.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m35.069197576s)
--- PASS: TestNetworkPlugins/group/flannel/Start (95.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-546869 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-546869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-546869 -n newest-cni-546869
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-546869 -n newest-cni-546869: exit status 2 (235.293965ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-546869 -n newest-cni-546869
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-546869 -n newest-cni-546869: exit status 2 (241.325828ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-546869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-546869 -n newest-cni-546869
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-546869 -n newest-cni-546869
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)
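
The Pause subtests all follow the same pause / status / unpause / status sequence. A minimal manual sketch, assuming the same profile name as this run: while paused, status is expected to exit 2 with the API server reported as "Paused" and the kubelet as "Stopped"; after unpause both checks are repeated.

PROFILE=newest-cni-546869
out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p "$PROFILE"   # "Paused", exit status 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$PROFILE"     # "Stopped", exit status 2
out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p "$PROFILE"
out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$PROFILE"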

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (87.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m27.222563155s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-269490 "pgrep -a kubelet"
I0401 21:05:23.949721   16301 config.go:182] Loaded profile config "auto-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-269490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7qbh5" [9b973a5e-4af0-47e6-ad74-e9e462fd2021] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7qbh5" [9b973a5e-4af0-47e6-ad74-e9e462fd2021] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003739221s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)
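
The NetCatPod subtests apply testdata/netcat-deployment.yaml and wait for the app=netcat pod to become Ready. The manifest below is only an illustrative stand-in, not the real testdata file: the label and container name match the log above, but the image and command are placeholder assumptions, and the real manifest evidently also exposes the pods behind a "netcat" service on port 8080 (see the HairPin probe further down).

kubectl --context auto-269490 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils            # container name seen in the Pending status above
        image: busybox:1.36       # placeholder image, not the one the test actually uses
        command: ["sleep", "3600"]
EOF
kubectl --context auto-269490 wait --for=condition=ready pod -l app=netcat --timeout=15m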

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
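
The DNS, Localhost, and HairPin subtests are three connectivity probes run inside the netcat deployment; the exact commands appear in the log and can be replayed against any of the profiles (auto-269490 shown here).

CTX=auto-269490
# DNS: resolve the in-cluster kubernetes.default service
kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
# Localhost: something listening on port 8080 inside the pod itself
kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod reaches its own service name back through the CNI
kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"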

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.941837319s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qtklb" [1a597a52-795f-4633-b5b8-7c626fcb0091] Running
E0401 21:05:55.863217   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:55.869650   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:55.881083   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:55.902567   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:55.944063   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:56.026196   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:56.187764   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:56.509345   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:57.151361   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:05:58.433459   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005207304s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
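
The ControllerPod subtest waits up to 10 minutes for the CNI's controller pod to report Running. A rough manual equivalent of the flannel wait above:

kubectl --context flannel-269490 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m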

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-269490 "pgrep -a kubelet"
E0401 21:06:00.994692   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
I0401 21:06:01.188146   16301 config.go:182] Loaded profile config "flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-269490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-phf26" [8e2459a3-93a7-43fe-8adb-aaa5e7c8a666] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-phf26" [8e2459a3-93a7-43fe-8adb-aaa5e7c8a666] Running
E0401 21:06:06.116727   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003930846s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (86.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m26.52443094s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-269490 "pgrep -a kubelet"
I0401 21:06:34.439409   16301 config.go:182] Loaded profile config "enable-default-cni-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-269490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zn8z4" [a8f799f3-870b-4293-a2d1-161b5d19bb15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0401 21:06:36.840933   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/no-preload-881142/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:39.529863   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:39.536348   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:39.547778   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:39.569241   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:39.610787   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-zn8z4" [a8f799f3-870b-4293-a2d1-161b5d19bb15] Running
E0401 21:06:39.692028   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:39.853589   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:40.174969   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:40.817124   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:42.098697   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
E0401 21:06:44.660149   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003839297s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-269490 "pgrep -a kubelet"
I0401 21:06:53.158136   16301 config.go:182] Loaded profile config "bridge-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-269490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2bbqf" [ed1dbbe1-2dc3-4638-a81a-1741425d291c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2bbqf" [ed1dbbe1-2dc3-4638-a81a-1741425d291c] Running
E0401 21:07:00.023141   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003918654s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (69.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m9.772622951s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (79.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0401 21:07:27.798930   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/functional-366801/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-269490 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m19.717107805s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8lpnw" [75dee764-9af1-4f9d-8248-8f333c9b3a75] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00504127s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-269490 "pgrep -a kubelet"
E0401 21:08:01.467217   16301 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/default-k8s-diff-port-704555/client.crt: no such file or directory" logger="UnhandledError"
I0401 21:08:01.763046   16301 config.go:182] Loaded profile config "calico-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-269490 replace --force -f testdata/netcat-deployment.yaml
I0401 21:08:02.319052   16301 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lvr8k" [e6a7432b-fc3b-4cab-b8e0-45d920bb7717] Pending
helpers_test.go:344: "netcat-5d86dc444-lvr8k" [e6a7432b-fc3b-4cab-b8e0-45d920bb7717] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lvr8k" [e6a7432b-fc3b-4cab-b8e0-45d920bb7717] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004287035s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nqt4k" [77a8572e-36d9-4789-a305-c00c892b67ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003382188s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-269490 "pgrep -a kubelet"
I0401 21:08:18.056893   16301 config.go:182] Loaded profile config "kindnet-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-269490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t9m2n" [31ed716c-76fe-4437-844a-505b2992c0f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t9m2n" [31ed716c-76fe-4437-844a-505b2992c0f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003586964s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-269490 "pgrep -a kubelet"
I0401 21:08:40.784903   16301 config.go:182] Loaded profile config "custom-flannel-269490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-269490 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bk2f4" [b786e458-0b9d-4bc3-8b37-50ac6fd4075d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bk2f4" [b786e458-0b9d-4bc3-8b37-50ac6fd4075d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00285377s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-269490 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-269490 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.14
275 TestNetworkPlugins/group/kubenet 3.07
283 TestNetworkPlugins/group/cilium 3.38
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-357468 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-468156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-468156
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-269490 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-269490" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.171:8443
  name: running-upgrade-877059
contexts:
- context:
    cluster: running-upgrade-877059
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-877059
  name: running-upgrade-877059
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-877059
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/running-upgrade-877059/client.crt
    client-key: /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/running-upgrade-877059/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-269490

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269490"

                                                
                                                
----------------------- debugLogs end: kubenet-269490 [took: 2.924154783s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-269490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-269490
--- SKIP: TestNetworkPlugins/group/kubenet (3.07s)
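Note: every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the kubenet profile is never provisioned; the test skips before any "minikube start" runs, so only the deferred debug dump and the profile cleanup touch the host. A minimal sketch of the kind of guard that produces this skip follows; the names and wiring are assumptions for illustration, not the actual minikube net_test.go source:

package net_test

import "testing"

// Hypothetical guard: kubenet provides no CNI, so a runtime that needs a CNI
// plugin (crio in this job) must skip before a cluster is ever started.
func TestKubenetSkipGuardSketch(t *testing.T) {
	containerRuntime := "crio" // assumed to come from the suite's --container-runtime flag
	if containerRuntime != "docker" {
		t.Skipf("Skipping the test as %s container runtime requires CNI", containerRuntime)
	}
}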

                                                
                                    
TestNetworkPlugins/group/cilium (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-269490 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-269490" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-9129/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.171:8443
  name: running-upgrade-877059
contexts:
- context:
    cluster: running-upgrade-877059
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: running-upgrade-877059
  name: running-upgrade-877059
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-877059
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/running-upgrade-877059/client.crt
    client-key: /home/jenkins/minikube-integration/20506-9129/.minikube/profiles/running-upgrade-877059/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-269490

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-269490" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269490"

                                                
                                                
----------------------- debugLogs end: cilium-269490 [took: 3.227660657s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-269490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-269490
--- SKIP: TestNetworkPlugins/group/cilium (3.38s)
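Note: the kubectl config captured in both debugLogs dumps belongs to a leftover running-upgrade-877059 cluster and has current-context set to "", so every probe that passes an explicit context for the never-started kubenet/cilium profiles reports "context was not found" (a kubectl configuration error), while host-level probes report "Profile ... not found" (a minikube error). A minimal, hypothetical sketch of how such a probe is effectively issued is below; it simply shells out to kubectl and surfaces the same error text, and it assumes kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// Run one debug probe against a context that is absent from the kubeconfig;
// the combined output contains the "context ... does not exist" message
// seen throughout the dumps above.
func main() {
	out, err := exec.Command("kubectl", "--context", "cilium-269490", "get", "pods", "-A").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}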

                                                
                                    